OpenAI announced these API updates 3 days ago:

  • new function calling capability in the Chat Completions API
  • updated and more steerable versions of gpt-4 and gpt-3.5-turbo
  • new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
  • 75% cost reduction on our state-of-the-art embeddings model
  • 25% cost reduction on input tokens for gpt-3.5-turbo
  • a deprecation timeline for the gpt-3.5-turbo-0301 and gpt-4-0314 models
  𝕊𝕚𝕤𝕪𝕡𝕙𝕖𝕒𝕟@programming.dev (OP) · 1 year ago

    gpt-3.5-turbo with the 16k context can now fit about 20 printed pages in its context. This is a game changer for summarization and documentation-based question answering applications. I tried it in the API playground and it works really well!
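
    For anyone who wants to try it outside the playground, here is a minimal sketch using the (pre-1.0) openai Python package. The model name `gpt-3.5-turbo-16k` and the input file are just illustrative assumptions:

```python
# Sketch: summarizing a long document with the 16k-context model.
# Assumes the pre-1.0 openai Python package and an OPENAI_API_KEY env var;
# "long_document.txt" is a placeholder path.
import openai

with open("long_document.txt") as f:
    document = f.read()  # roughly 20 printed pages of text fits in 16k tokens

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    messages=[
        {"role": "system", "content": "Summarize the user's document in a few paragraphs."},
        {"role": "user", "content": document},
    ],
)
print(response["choices"][0]["message"]["content"])
```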

    Function calling also seems very useful for tool-using apps. No more crossing fingers and hoping the LLM will return a syntactically valid call!
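
    Roughly what that looks like in practice, as a sketch against the new `functions` parameter (the -0613 snapshot name and the weather function schema are made up here for illustration):

```python
# Sketch of the function calling flow: declare a function with a JSON schema,
# and the model returns structured arguments instead of free-form text.
# Assumes the pre-1.0 openai Python package; get_weather is a made-up example.
import json
import openai

functions = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    functions=functions,
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # Arguments come back as a JSON string matching the declared schema,
    # so no more parsing a syntactically dubious call out of prose.
    args = json.loads(message["function_call"]["arguments"])
    print(message["function_call"]["name"], args)
```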