• stochasticferret@lemmy.dbzer0.com
    1 year ago

    I did some really simple experiments in a notebook with langchain and a few PDFs. It’s a neat technique.

    The first thing that jumped out at me was that retrieval quality is an upstream bottleneck on the LLM, so any method that improves retrieval performance is fair game. Embeddings and vector databases are hot right now, but there’s no reason you can’t augment them with traditional search methods like keyword/BM25 ranking.
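    One common way to combine the two (a sketch, not anything specific the comment describes): run a keyword search and a vector search separately, then merge the ranked lists with Reciprocal Rank Fusion. The document IDs and hit lists below are made up for illustration.

    ```python
    # Hedged sketch: fuse a keyword-based ranking (e.g. from BM25) with an
    # embedding-similarity ranking using Reciprocal Rank Fusion (RRF).
    # Doc IDs and result lists are hypothetical.
    from collections import defaultdict

    def rrf_fuse(rankings, k=60):
        """Merge several ranked lists of doc IDs into one list via RRF.

        Each doc scores sum(1 / (k + rank)) across the lists it appears in,
        so docs ranked highly by multiple retrievers float to the top.
        """
        scores = defaultdict(float)
        for ranking in rankings:
            for rank, doc_id in enumerate(ranking, start=1):
                scores[doc_id] += 1.0 / (k + rank)
        return sorted(scores, key=scores.get, reverse=True)

    # Hypothetical results for one query from each retriever:
    keyword_hits = ["doc_a", "doc_b", "doc_c"]   # traditional keyword search
    vector_hits = ["doc_c", "doc_a", "doc_d"]    # embedding similarity search

    fused = rrf_fuse([keyword_hits, vector_hits])
    print(fused)  # doc_a and doc_c rise above the single-retriever hits
    ```

    The `k` constant damps the influence of top ranks so one retriever can’t dominate; 60 is the value commonly used, but it’s tunable.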

  • BitSound@lemmy.world
    1 year ago

    We’re using langchain, llamaindex, and weaviate with gpt-4. Overall it works pretty well. I’m also trying out the langsmith beta, and the automatic visibility into what the LLM is doing is pretty nice. I’ve seen comments online saying langchain is unnecessary, but integrations like langsmith are actually pretty handy.