• snooggums@midwest.social · 6 months ago

    The seven models tested included GPT-4V, Gemini Pro, and the open-source 7B-parameter versions of LLaVA-v1, LLaVA-v1.6, and MiniGPT-v2, as well as the specialized medical models LLaVA-Med and CheXagent. These were chosen because their computational costs, efficiency, and inference speeds make them practical in medical settings, the researchers explain.

    This seems like a case of “they just aren’t using AI right; if they used it right, it would work” — when it sure looks like they were using the models intended for these specific medical tasks.

    • spaduf@slrpnk.net · 6 months ago (edited)

      Those are not the sort of models anybody in the field would actually use (medical CV with deep-learning-based analysis is a vibrant field with many breakthroughs in recent years). These are the sort of models tech bros are trying to sell to the public as general AI. There is a world of difference. For contrast, here's a sketch of what a task-specific model looks like in practice (see below).
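
      A minimal sketch of using a dedicated chest X-ray classifier from the open-source torchxrayvision library — the kind of task-specific medical CV model the field builds on, as opposed to a general-purpose VLM. The filename and exact preprocessing here are illustrative assumptions, not something from the article:

      ```python
      import torch
      import torchvision
      import torchxrayvision as xrv
      import skimage.io

      # Pretrained DenseNet-121 chest X-ray classifier, trained across
      # multiple public CXR datasets — a narrow, task-specific model.
      model = xrv.models.DenseNet(weights="densenet121-res224-all")
      model.eval()

      img = skimage.io.imread("chest_xray.png")   # hypothetical input image
      img = xrv.datasets.normalize(img, 255)      # 8-bit pixels -> [-1024, 1024]
      if img.ndim == 3:
          img = img.mean(2)                       # collapse RGB to one channel
      img = img[None, ...]                        # -> (1, H, W)

      transform = torchvision.transforms.Compose([
          xrv.datasets.XRayCenterCrop(),
          xrv.datasets.XRayResizer(224),
      ])
      img = torch.from_numpy(transform(img))

      with torch.no_grad():
          preds = model(img[None, ...])           # batch dim -> (1, 1, 224, 224)

      # Per-pathology scores, e.g. {"Pneumonia": 0.12, "Effusion": 0.31, ...}
      print(dict(zip(model.pathologies, preds[0].tolist())))
      ```

      A model like this outputs calibrated scores for a fixed set of findings it was trained on — nothing like prompting a general chat model to freeform-describe an image.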