Getting the data ready for AI is anything but simple. Healthcare data is often siloed across multiple environments, such as legacy applications, cloud platforms, and edge devices. It also comes in a variety of formats and standards, like C-CDA, HL7v2, CSV, and FHIR, and many EHR vendors store data in proprietary formats on top of that. To top it off, these systems tend to be incompatible, making it difficult to get the data into one central data store where it can be fed to AI.
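To make that consolidation challenge concrete, here’s a minimal sketch of mapping two very different sources into one common record shape. The FHIR Patient fields follow the R4 spec, but the CSV column names and the target field names are purely illustrative assumptions, not part of any standard:

```python
import json

# A minimal sketch of pulling two very different sources into one common record
# shape. The target field names (patient_id, family_name, birth_date) are our
# own illustrative choices, not part of any standard.

def from_fhir_patient(raw_json: str) -> dict:
    """Extract a few fields from a FHIR R4 Patient resource."""
    patient = json.loads(raw_json)
    name = patient.get("name", [{}])[0]
    return {
        "patient_id": patient.get("id"),
        "family_name": name.get("family"),
        "birth_date": patient.get("birthDate"),
    }

def from_csv_row(row: dict) -> dict:
    """Map a row from a hypothetical EHR CSV export to the same shape."""
    return {
        "patient_id": row["MRN"],
        "family_name": row["LAST_NAME"],
        "birth_date": row["DOB"],
    }

fhir_patient = '{"resourceType": "Patient", "id": "123", "name": [{"family": "Rivera"}], "birthDate": "1980-04-02"}'
print(from_fhir_patient(fhir_patient))
print(from_csv_row({"MRN": "A-77", "LAST_NAME": "Chen", "DOB": "1975-09-18"}))
```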
The data produced by the healthcare industry is also largely unstructured: clinical notes, discharge summaries, lab reports, imaging data from MRI scans, and so on. You generally can’t just hand unstructured data to an algorithm and expect it to do anything useful, which is a shame because it holds some of the greatest value for healthcare organizations.
That’s why proper data preparation and data management are key to being successful with generative AI in healthcare. Here are some things you can do to get your data ready for generative AI.
First, you need data for your model to use, and that begins with data collection. We’ve already noted how varied healthcare data can be, which means collecting it will probably be difficult. The best advice we can give you is to start your data collection efforts early and to standardize as much as possible, beginning with standard operating procedures and training for all staff who code or enter data into the system.
You should also bear in mind that data volumes are growing at a staggering rate, so there could be far more data in a year than there is today. Plan ahead and make sure you have adequate storage for all the data you want to process.
People associate “machine learning” with exciting buzzwords like “neural networks” and coders in hoodies training advanced models late into the night. This is true as far as it goes, but a lot of machine learning boils down to slightly less glamorous work like data cleaning.
Machine learning models are only as good as the data they’re trained on, so you must comb through your data to ensure it’s as high quality as possible. This can involve standardizing date formats and data types, filling in missing values, dropping anomalies, and much more.
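Here’s a minimal cleaning sketch using pandas. The column names and values are hypothetical stand-ins for whatever your own export contains:

```python
import pandas as pd

# Hypothetical sample: column names stand in for whatever your export contains.
df = pd.DataFrame({
    "admit_date": ["2023-01-05", "2023-01-09", None],
    "weight_kg": [82.5, None, 64.0],
    "heart_rate": [72, 310, 68],  # 310 bpm is almost certainly a data-entry error
})

# Convert dates to a proper datetime type so every record uses the same format.
df["admit_date"] = pd.to_datetime(df["admit_date"])

# Fill in missing values (here, with the column median).
df["weight_kg"] = df["weight_kg"].fillna(df["weight_kg"].median())

# Drop physiologically implausible anomalies.
df = df[df["heart_rate"].between(20, 250)]

print(df)
```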
With a few exceptions, such as unsupervised techniques like clustering, most machine learning applications require labeled data. These labels are crucial to learning the statistical patterns in the data, and without them the models aren’t much use.
Labeling data is more or less exactly what it sounds like. If you’re training an image classifier that identifies which radiological scans contain tumors, for example, you’ll need images with cancer and others without cancer, and they’ll have to have labels like “cancer” and “no cancer”.
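In practice, the labels often end up as nothing more than a manifest pairing each item with its class. The sketch below mirrors the scan example above; the file paths and format are illustrative, not tied to any particular labeling tool:

```python
import pandas as pd

# Illustrative only: a simple label manifest pairing each scan with its class.
# The file paths and the "cancer" / "no cancer" labels mirror the example above.
labels = pd.DataFrame({
    "scan_path": ["scans/patient_001.png", "scans/patient_002.png", "scans/patient_003.png"],
    "label": ["cancer", "no cancer", "no cancer"],
})

labels.to_csv("labels.csv", index=False)       # hand this manifest to your training code
print(labels["label"].value_counts())          # quick check for class imbalance
```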
Be aware that data labeling on its own can be an extremely labor- and time-intensive process, so leave plenty of time for it if you’re going to create bespoke data for a model.
Once you’ve collected, cleaned, and labeled the data you’ll need to spend some time exploring it. Appropriately enough, this is known as “exploratory data analysis” (EDA), and it’s an important part of ensuring that the data is accurate.
A good way to start EDA is just to generate a bunch of summary statistics related to different parts of the data. You might look at averages for values like age or weight, for example, and then dive into specific metrics if anything looks amiss.
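With pandas this takes a couple of lines. The tiny inline sample below stands in for your cleaned dataset, and the column names are hypothetical:

```python
import pandas as pd

# A tiny, made-up sample standing in for the cleaned dataset; column names are hypothetical.
df = pd.DataFrame({
    "age": [54, 61, 47, 39, 102],            # 102 may be legitimate, but it's worth a second look
    "weight_kg": [82.5, 91.0, 64.0, 70.2, 68.4],
})

print(df.describe())                           # count, mean, std, min/max, and quartiles per column
print(df["age"].sort_values(ascending=False).head(3))  # dig into the largest ages directly
```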
It’s also common to generate plots for the same purpose. Bar charts, line plots, histograms, and the like are all great ways of turning the numbers into a picture that’s easier to understand and evaluate.
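For instance, a quick histogram of the same illustrative ages (assuming matplotlib is available) makes outliers easy to spot:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Same illustrative ages as above, extended slightly so the histogram has some shape.
ages = pd.Series([54, 61, 47, 39, 102, 58, 63, 49, 55, 60])

ages.plot.hist(bins=10, title="Patient age distribution")
plt.xlabel("Age (years)")
plt.savefig("age_histogram.png")   # or plt.show() in an interactive session
```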
One strategy that more and more organizations are turning to is the use of dedicated data preparation offerings that simplify and streamline much of the work discussed in the previous few sections.
For data preparation, Amazon SageMaker Data Wrangler is a good choice. It facilitates exploration and visualization, can help identify anomalies, and supports many common data transformations. For extract, transform, and load (ETL) work, AWS Glue can help you discover, prepare, move, and integrate data from multiple sources to get it ready for analytics and AI.
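As a rough sketch of where code fits in, assuming you’ve already defined a Glue ETL job (the name "healthcare-etl-job" below is hypothetical), triggering it from Python with boto3 looks something like this:

```python
import boto3

# Hedged sketch: triggering an existing AWS Glue ETL job. The job itself
# (sources, transforms, targets) would be defined separately in the Glue
# console or through infrastructure-as-code; "healthcare-etl-job" is hypothetical.
glue = boto3.client("glue", region_name="us-east-1")

run = glue.start_job_run(JobName="healthcare-etl-job")
status = glue.get_job_run(JobName="healthcare-etl-job", RunId=run["JobRunId"])
print(status["JobRun"]["JobRunState"])   # e.g. RUNNING, SUCCEEDED, FAILED
```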
No discussion of data preparation would be complete without a mention of data security, particularly in the context of healthcare. In the absence of proper security protocols, AI models are vulnerable to corruption, theft, and data leakage. This can result in an AI exposing sensitive information to the wrong people, or producing faulty responses or recommendations.
Learn More: Top Generative AI Security Threats and How to Mitigate Them [Blog]
For many organizations, there’s pressure to implement generative AI into products and solutions quickly. This can put a strain on internal team members, causing other company initiatives to slow or get dropped. Working with a third party that specializes in AI can help you prepare your data and operationalize a solution faster, alleviating the burden on internal team members.
If you want to learn more, download the FREE eBook Getting Started with Generative AI in Healthcare. Or schedule a free consultation to learn how our 30-day Generative AI for Healthcare Kickstarter can help you get started today.