Large language models have made it possible to apply AI to many real-world problems. Using AI in medicine, however, is a different ball game altogether, as it requires prioritizing safety, equity, and fairness. In 2023, foundation models built for the medical domain were at the forefront of these advances.
Here are the top 5 models that promise a significant impact on healthcare practice.
Med-PaLM 2
Med-PaLM is a large language model designed to answer medical questions with high accuracy. It has been built and evaluated specifically for the medical domain, using medical exams, medical research, and consumer health queries. The latest version, Med-PaLM 2, was unveiled at Google Health’s annual event in March 2023.
This model has an impressive accuracy rate of 86.5% on USMLE-style questions and can provide comprehensive and accurate answers to consumer health questions. Limited testing of Med-PaLM 2 will be conducted soon to explore potential use cases and gather feedback.
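For readers unfamiliar with how such benchmark figures are produced, here is a minimal sketch of scoring a model on USMLE-style multiple-choice questions; the questions and predicted answers below are hypothetical placeholders, not Med-PaLM 2 outputs.

```python
# Minimal sketch: computing accuracy on USMLE-style multiple-choice questions.
# The questions and predictions are hypothetical placeholders.
questions = [
    {"id": "q1", "answer": "C"},
    {"id": "q2", "answer": "A"},
    {"id": "q3", "answer": "D"},
]

# Predicted option letters, e.g. parsed from a model's free-text responses.
predictions = {"q1": "C", "q2": "B", "q3": "D"}

correct = sum(1 for q in questions if predictions.get(q["id"]) == q["answer"])
print(f"Accuracy: {correct / len(questions):.1%}")  # 66.7% on this toy set
```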
AlphaFold 2.3
AlphaFold, a cutting-edge AI system developed by DeepMind, predicts protein structures computationally with unparalleled accuracy and speed. In collaboration with EMBL’s European Bioinformatics Institute (EMBL-EBI), DeepMind has made more than 200 million AlphaFold-generated protein structure predictions openly accessible to the scientific community worldwide.
The predictions cover nearly all cataloged proteins known to science, offering the potential to significantly expand our knowledge of biology. AlphaFold is the AI-based protein-folding solution recognized by the Critical Assessment of Protein Structure Prediction (CASP) community, which challenges teams to predict protein structures from amino acid sequences for proteins whose 3D shapes are already known.
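For researchers who want to work with these predictions programmatically, here is a minimal sketch of fetching one predicted structure from the AlphaFold database at EMBL-EBI. The endpoint, response fields, and example UniProt accession are assumptions based on the public database at the time of writing and may change.

```python
# Sketch: downloading an AlphaFold-predicted structure from the public
# AlphaFold Protein Structure Database (EMBL-EBI). Endpoint and field names
# are assumptions and may change over time.
import requests

uniprot_id = "P69905"  # example accession: human haemoglobin subunit alpha

resp = requests.get(f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_id}", timeout=30)
resp.raise_for_status()
entry = resp.json()[0]  # one record per predicted model

pdb = requests.get(entry["pdbUrl"], timeout=30)  # predicted coordinates in PDB format
pdb.raise_for_status()
with open(f"AF-{uniprot_id}.pdb", "wb") as f:
    f.write(pdb.content)

print("Saved prediction:", entry.get("uniprotDescription", uniprot_id))
```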
Bioformer
Pretrained language models like Bidirectional Encoder Representations from Transformers (BERT) have shown impressive results in natural language processing (NLP) tasks. Recently, BERT has been adapted for the biomedical domain. However, these models have a high number of parameters, making them computationally expensive for large-scale NLP applications.
The developers hypothesized that the number of parameters could be reduced without significantly affecting performance; they therefore created Bioformer, a compact BERT model designed specifically for biomedical text mining. Bioformer uses a biomedical vocabulary and was pre-trained from scratch on PubMed abstracts and PubMed Central full-text articles.
The creators trained two Bioformer models – Bioformer8L and Bioformer16L – which reduced the model size by 60% compared to BERT-Base.
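As a rough illustration of how a compact BERT-style model like this is used for biomedical text mining, the sketch below loads a Bioformer checkpoint with Hugging Face Transformers and runs a fill-mask query; the model identifier "bioformers/bioformer-8L" is assumed to be the published checkpoint name.

```python
# Sketch: loading a Bioformer checkpoint for biomedical text mining.
# The model identifier is an assumption about the published checkpoint name.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_name = "bioformers/bioformer-8L"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Fill-mask query on a biomedical sentence as a quick sanity check.
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for pred in fill("Aspirin is used to reduce [MASK] and inflammation."):
    print(pred["token_str"], round(pred["score"], 3))
```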
RoseTTAFold All-Atom
RoseTTAFold is an accurate deep-learning program for modeling protein structures, originally designed for biomolecules made entirely of amino acids. In 2023, an upgrade called RoseTTAFold All-Atom was introduced. With it, the program can model full biological assemblies containing different types of molecules, including proteins, DNA, RNA, small molecules, metals, and covalent modifications of proteins.
This upgrade is significant because proteins usually interact with non-protein molecules to function correctly. With RoseTTAFold All-Atom, scientists can model how proteins and small-molecule drugs interact, a capability that may prove valuable for drug discovery research.
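To make that concrete, here is a minimal sketch of one downstream analysis a researcher might run on a modeled protein–ligand complex, such as a structure produced by RoseTTAFold All-Atom: listing the protein residues that sit within 4 Å of any ligand atom. The file name is a hypothetical placeholder, and Biopython is used only for parsing; this is not part of RoseTTAFold itself.

```python
# Sketch: listing protein residues within 4 Å of a bound small molecule
# in a modeled complex. "complex_model.pdb" is a hypothetical placeholder file.
from Bio.PDB import PDBParser, NeighborSearch

structure = PDBParser(QUIET=True).get_structure("model", "complex_model.pdb")

protein_atoms, ligand_atoms = [], []
for residue in structure.get_residues():
    hetfield = residue.id[0]  # " " = standard residue, "H_xxx" = ligand, "W" = water
    if hetfield == " ":
        protein_atoms.extend(residue.get_atoms())
    elif hetfield.startswith("H_"):
        ligand_atoms.extend(residue.get_atoms())

search = NeighborSearch(protein_atoms)
contacts = set()
for atom in ligand_atoms:
    for near in search.search(atom.coord, 4.0):  # 4 Å contact cutoff
        res = near.get_parent()
        contacts.add((res.get_parent().id, res.get_resname(), res.id[1]))

for chain_id, resname, resnum in sorted(contacts):
    print(f"Chain {chain_id}: {resname}{resnum}")
```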
ChatGLM-6B
It is often assumed that training and deploying a dedicated dialogue model is beyond the reach of most hospitals, which has hindered the adoption of LLMs in the medical industry. To address this, the developers collected databases of Chinese medical dialogues with the help of ChatGPT and used several techniques to train an easy-to-deploy LLM. Notably, they were able to fine-tune ChatGLM-6B on a single A100 80G GPU in just 13 hours, making a healthcare-purpose LLM very affordable.
ChatGLM-6B generates answers that are aligned with human preferences. Furthermore, the developers used low-rank adaptation (LoRA) to fine-tune ChatGLM with only 7 million trainable parameters; fine-tuning on the full Chinese medical dialogue dataset took 8 hours on an A100 GPU.
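As a rough illustration of the LoRA setup described above, the sketch below attaches low-rank adapters to ChatGLM-6B with Hugging Face Transformers and the PEFT library. The rank, alpha, and target-module choices are illustrative assumptions rather than the developers' exact configuration, and the training loop itself is omitted.

```python
# Sketch: attaching LoRA adapters to ChatGLM-6B with PEFT. Hyperparameters are
# illustrative assumptions; the actual fine-tuning loop is omitted.
from transformers import AutoTokenizer, AutoModel
from peft import LoraConfig, get_peft_model

model_name = "THUDM/chatglm-6b"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True).half().cuda()

# Low-rank adapters on the fused attention projection keep the trainable
# parameter count in the millions rather than the billions.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],  # ChatGLM's fused QKV projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # a few million trainable parameters
```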