Accelerating Your Model Evaluation and Fine-tuning with SFR-Judge

As the development and deployment of large language models (LLMs) accelerate, evaluating model outputs has become increasingly important. The established method typically involves recruiting and training human evaluators, having them assess model responses, and then auditing the quality of their evaluations. Unfortunately, this process does not…
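To make the LLM-as-judge pattern the post builds on concrete, here is a minimal sketch that asks a judge model to score a single response. The prompt wording and the `call_llm` helper are illustrative assumptions, standing in for any chat-completion client; this is not SFR-Judge's actual interface.

```python
# A minimal LLM-as-judge sketch. `call_llm` stands in for any
# chat-completion client; it is an assumption for illustration,
# not SFR-Judge's API.
from typing import Callable

JUDGE_PROMPT = """You are an impartial judge. Rate the response to the
instruction on a 1-5 scale and briefly explain your reasoning.

Instruction: {instruction}
Response: {response}

Reply in the form "Score: <1-5>. Reason: <one sentence>"."""


def judge(instruction: str, response: str,
          call_llm: Callable[[str], str]) -> str:
    """Ask a judge model to evaluate one (instruction, response) pair."""
    prompt = JUDGE_PROMPT.format(instruction=instruction, response=response)
    return call_llm(prompt)
```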

26 Sep 2024

Building Contextually Faithful RAG Applications with SFR-RAG

Retrieval Augmented Generation (RAG) has not only become one of the most heavily invested research areas in generative AI but has also attracted considerable popularity and commercialization opportunities. RAG is typically applied to question answering, where external contextual information retrieved from a (potentially private) data source is provided…
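As a rough illustration of the retrieve-then-generate flow described above, the sketch below ranks documents by naive keyword overlap and supplies the top hits as context to the model. The retriever, prompt template, and `call_llm` helper are assumptions for illustration, not SFR-RAG's pipeline.

```python
# A bare-bones retrieve-then-generate sketch of the RAG pattern.
# The keyword-overlap retriever and prompt template are illustrative
# assumptions; SFR-RAG's actual implementation is not shown here.
from typing import Callable

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the question, keep top k."""
    q_terms = set(question.lower().split())
    ranked = sorted(corpus,
                    key=lambda doc: len(q_terms & set(doc.lower().split())),
                    reverse=True)
    return ranked[:k]

def answer(question: str, corpus: list[str],
           call_llm: Callable[[str], str]) -> str:
    """Provide retrieved context to the model and ask for a grounded answer."""
    context = "\n".join(retrieve(question, corpus))
    prompt = (f"Answer using only the context below.\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    return call_llm(prompt)
```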

17 Sep 2024