Artificial intelligence (AI) has become a high priority for many technology companies. Salaries for recent graduates of AI engineering programs start at about $300,000. That’s because of AI’s huge potential to solve problems in entirely new ways. One of its strengths is processing huge amounts of data and, through intensive computation and analysis, deducing meaningful patterns and predictions that can then be used to create new businesses and services.
A recent example comes from Verily, the health-tech division of Google’s parent company, Alphabet. The company used AI to assess a person’s risk of disease. Researchers examined scans of the backs of the eyes of nearly 300,000 patients and paired each image with the medical conditions recorded in that patient’s health records. They then used a neural-network analysis technique to compare the two sets of data and look for patterns connecting the patients’ eye scans to their health conditions.
Google found that it could use a simple scan to accurately determine an individual’s health indicators, including the risk of heart disease, whether or not the person smoked, and their blood pressure. The researchers also found that the scans could be used to predict a patient’s likelihood of suffering a heart attack. Such a scan could eventually be done quickly and may replace the current practice, which requires a blood test and a physical exam.
The results were just published in a paper in the journal Nature Biomedical Engineering, after being shared as a preprint, ahead of peer review, this past September. While the results are very exciting, more testing needs to be done before the approach can be recommended as a clinical technique.
Judging health conditions by looking at the eyes is possible because the rear wall of the eye, the retina, contains thousands of blood vessels whose state reflects the condition of the body as a whole.
The algorithm developed by Google was then tested by comparing the retinal images of pairs of patients, one who had suffered a cardiac event and one who had not, and asking it to identify the higher-risk patient. It was correct 70 percent of the time, compared with 72 percent for the current methodology, which requires a blood test.
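That pairwise test has a simple logic: given a risk score for each patient, count how often the model scores the patient who had the cardiac event higher than the one who did not. A minimal sketch of that metric, using made-up risk scores rather than anything from the study, might look like this:

```python
def pairwise_accuracy(scores_event, scores_no_event):
    """Fraction of (event, no-event) patient pairs where the model
    assigns a higher risk score to the patient who had the event."""
    correct = 0
    total = 0
    for e in scores_event:
        for n in scores_no_event:
            if e > n:
                correct += 1
            total += 1
    return correct / total

# Toy illustration with invented risk scores (not real data):
had_event = [0.8, 0.6, 0.7]
no_event = [0.3, 0.5, 0.65]
print(round(pairwise_accuracy(had_event, no_event), 2))  # → 0.89
```

A model that scored patients at random would land near 50 percent on this metric, which is why the reported 70 percent is meaningful.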
For Google, the work shows how AI can aid scientific discovery: given enough data, it can surface meaningful associations even when the underlying relationship is not fully understood or obvious.
Lily Peng, M.D., product manager and a lead on this project within Google AI, noted in the Google AI official blog, “Using deep learning algorithms trained on data from 284,335 patients, we were able to predict CV risk factors from retinal images with surprisingly high accuracy for patients from two independent data sets of 12,026 and 999 patients.”
“For example,” she wrote, “our algorithm could distinguish the retinal images of a smoker from that of a non-smoker 71 percent of the time, compared to a ~50 percent (i.e. random) accuracy by human experts.”
Peng continued, “Traditionally, medical discoveries are often made through a sophisticated form of guess and test – making hypotheses from observations and then designing and running experiments to test the hypotheses. However, with medical images, observing and quantifying associations can be difficult because of the wide variety of features, patterns, colors, values and shapes that are present in real images.”