Huan Wang, Shelby Heinecke, Juan Carlos Niebles, Caiming Xiong
TL;DR: We release xLAM, a series of LLMs optimized for function calling and AI Agents. The series offers several variants designed to serve different application domains, from mobile usage to performance-demanding contexts, and shows competitive performance across key agent benchmarks.
In a traditional Reinforcement Learning (RL) framework, the notion of the “Agent” plays a key role. This framework comprises pivotal concepts such as the environment, states/observations, actions, rewards, and the policy that maps observations to actions.
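To make these concepts concrete, here is a minimal, self-contained sketch of the classic agent-environment interaction loop; the `Environment` and `Agent` classes are hypothetical stand-ins, not part of any xLAM release.

```python
# A minimal sketch of the classic RL interaction loop; Environment and Agent
# are hypothetical stand-ins for the concepts above, not part of any release.
class Environment:
    def reset(self):
        return "initial observation"                  # initial state/observation

    def step(self, action):
        # Return (next observation, reward, done flag) for the chosen action.
        return "next observation", 1.0, True


class Agent:
    def act(self, observation):
        return "some action"                          # policy: observation -> action


env, agent = Environment(), Agent()
obs, done = env.reset(), False
while not done:
    action = agent.act(obs)                           # the agent picks an action
    obs, reward, done = env.step(action)              # the environment responds
```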
The advent of Large Language Models (LLMs) soon led to their application in agent-related scenarios. It was discovered that, with the right prompting, an LLM can generate structured text with high probability. Since the output is structured, it can be readily parsed into callable functions/actions. In particular, if the environment can be described in text, all the observations and rewards can be encapsulated within the prompt. Instead of a conventional RL agent modeling the conditional action distribution, a generic LLM combined with an output parser can be employed to determine the next action.
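As an illustration of this parse-and-dispatch idea, here is a minimal sketch assuming the model is prompted to emit a JSON object with `name` and `arguments` fields; the tool names are hypothetical.

```python
import json

# Hypothetical tool registry; names and signatures are illustrative only.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def execute_llm_action(llm_output: str) -> str:
    """Parse a structured LLM response and dispatch it to a callable tool."""
    call = json.loads(llm_output)                     # structured output from the LLM
    func = TOOLS[call["name"]]                        # look up the requested function
    return func(**call["arguments"])                  # call it with the generated arguments

print(execute_llm_action('{"name": "get_weather", "arguments": {"city": "Paris"}}'))
```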
Function calling is one of the most sought-after agent applications, where the agent is tasked with completing user commands via a series of function calls. These tasks typically involve a wide array of candidate functions/APIs that might help fulfill the user’s requirements, each with its own description, arguments, and return values. The applicable functions are presented to the LLM in the prompt; the LLM then selects the appropriate functions based on the context and the specific objective, fills in the corresponding arguments, and obtains the output from the chosen functions.
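For instance, a tool description of the kind serialized into the prompt might look like the following sketch; the schema and tool are illustrative, not the exact format used by xLAM.

```python
import json

# A hypothetical tool description of the kind placed in the prompt: each entry
# carries a name, a natural-language description, and typed parameters.
tools = [
    {
        "name": "search_flights",
        "description": "Search for flights between two airports on a given date.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin": {"type": "string", "description": "IATA code of the departure airport"},
                "destination": {"type": "string", "description": "IATA code of the arrival airport"},
                "date": {"type": "string", "description": "Departure date, YYYY-MM-DD"},
            },
            "required": ["origin", "destination", "date"],
        },
    }
]

# The available tools and the user query go into the prompt; the model is
# expected to reply with the chosen function name and its arguments.
prompt = (
    "You have access to the following tools:\n"
    f"{json.dumps(tools, indent=2)}\n\n"
    "User: Find me a flight from SFO to JFK on 2024-09-01.\n"
    "Respond with a JSON object containing 'name' and 'arguments'."
)
print(prompt)
```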
The widespread appeal of function-calling applications calls for better-suited LLMs, yet generic LLMs are not specifically tailored to function-calling contexts. To address this, we have compiled one of the most extensive collections of function-calling environments and data, ensuring a uniform format across all datasets. The idea is that, as more data from diverse function-calling environments is used to train a base foundation model, the model should, in theory, adapt to unseen function-calling environments.
We are launching three variants of xLAM to cater to different scenarios:
These three variants of the xLAM models are designed for both single-turn and multi-turn application scenarios across diverse environments and benchmarks. Previously, we released two xLAM models specifically trained for single-turn function-calling tasks: xLAM-1b-fc-r and xLAM-7b-fc-r. Notably, xLAM-7b-fc-r held second place on the previous Berkeley Function Calling Leaderboard V1 and is currently ranked #16 on the Berkeley Function Calling Leaderboard V2 Live. Its compact counterpart, xLAM-1b-fc-r, nicknamed the 'Tiny Giant,' has just 1 billion parameters, making it ideal for mobile applications.
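As a rough sketch of how the compact model can be used, the snippet below loads xLAM-1b-fc-r from the Hugging Face Hub with Transformers. Only basic generation is shown; in practice, the available tools should be serialized into the prompt as described in the model card.

```python
# A minimal sketch of loading and querying the compact model with Hugging Face
# Transformers; prompt construction for tools is omitted for brevity.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Salesforce/xLAM-1b-fc-r"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What's the weather like in Tokyo today?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```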
Our xLAM series performs consistently well across several key benchmarking environments, including ToolBench, the Berkeley Function Calling Leaderboard, WebShop, and AgentBoard. Below is an overview of the results:
Due to a recent service interruption at ToolBench, results for xLAM-8x22b-r on ToolBench are not yet available. Even so, it is clear that the xLAM series offers performance comparable to OpenAI's GPT models at a much smaller model size. Notably, the xLAM-8x22b-r model tops the Berkeley Function Calling Leaderboard in our evaluation.
While smaller models can perform adequately in specific scenarios, we've observed that larger models generally achieve better overall accuracy. This improved performance, however, comes at the cost of increased computational resources and latency. Detailed performance results can be found in our arXiv paper.
Our function-calling research assembled a diverse set of datasets, including notable ones such as ToolBench, WebShop, ToolAlpaca, HotpotQA, ALFWorld, API-Bank, Mind2Web, AgentBoard, and AgentBench. In addition to these existing function-calling datasets, we enriched our data pool with synthetic data from APIGen, SpecTools (coming soon), and an in-house large action model simulation (LAM Simulator) environment.
All datasets are combined and unified into a consistent format for model fine-tuning: for more consistent performance, we proposed a unified data format and processed all the data into it before training.
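As a purely hypothetical illustration (the exact schema is documented in our arXiv paper), a unified training sample could be structured along these lines:

```python
# A purely hypothetical illustration of a unified training sample; the exact
# schema used for xLAM is documented in the arXiv paper.
unified_sample = {
    "query": "Book a table for two at an Italian restaurant tonight.",
    "tools": [
        {
            "name": "find_restaurants",
            "description": "Search restaurants by cuisine.",
            "parameters": {"cuisine": {"type": "string"}},
        },
        {
            "name": "make_reservation",
            "description": "Reserve a table at a given restaurant.",
            "parameters": {"restaurant_id": {"type": "integer"}, "party_size": {"type": "integer"}},
        },
    ],
    "steps": [
        {
            "thought": "First, find candidate Italian restaurants.",
            "tool_calls": [{"name": "find_restaurants", "arguments": {"cuisine": "Italian"}}],
            "observation": "[{'name': 'Trattoria Roma', 'id': 42}]",
        },
    ],
}
```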
The unified data format is critical to the systematic augmentation and purification of our large-scale dataset. Using a meticulously crafted augmentation pipeline, we enhanced our raw data through several techniques, including Instruction and Format Diversification, Special Token Augmentation, Order/Sequence Shuffling, and Data Synthesis from Failure Case Analysis.
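For example, Order/Sequence Shuffling can be sketched as a simple transform on such a sample; the helper below is a hypothetical illustration, not our exact pipeline.

```python
import random

def shuffle_tool_order(sample: dict, seed: int = 0) -> dict:
    """Order/Sequence Shuffling (sketch): permute the listed tools so the model
    does not learn to rely on the position of the ground-truth function."""
    augmented = dict(sample)
    augmented["tools"] = sample["tools"][:]           # copy before shuffling in place
    random.Random(seed).shuffle(augmented["tools"])
    return augmented

sample = {"query": "...", "tools": [{"name": "tool_a"}, {"name": "tool_b"}, {"name": "tool_c"}]}
print(shuffle_tool_order(sample)["tools"])
```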
Through experimentation, we found that sheer data volume does not guarantee superior model performance; data quality is the pivotal factor. Datasets such as ToolBench, despite their size, are often noisy, and indiscriminately integrating them into training can significantly degrade overall data quality. To maintain data integrity and control, we employed OpenAI's GPT models to purge samples from datasets with potential quality issues.
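A minimal sketch of such an LLM-based filter, assuming an OpenAI-style judge with an illustrative prompt, might look like this (not the exact purification pipeline used for xLAM):

```python
# A minimal sketch of LLM-based purification, assuming an OpenAI-style judge;
# the prompt, model name, and decision rule are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_clean(sample_text: str) -> bool:
    """Ask a GPT judge whether a serialized trajectory is well-formed and correct."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Reply YES if the trajectory fulfills the user query "
                                          "with valid function calls, otherwise reply NO."},
            {"role": "user", "content": sample_text},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

# cleaned = [s for s in raw_samples if is_clean(json.dumps(s))]   # raw_samples is hypothetical
```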
Beyond existing environments and datasets, we constructed several pipelines and environments to generate synthetic data, which is then used to further enrich the function-calling training set. Along this line of effort, we are releasing multiple papers/reports, including:
We are excited to unveil xLAM, a collection of large action models designed to cater to a diverse range of application scenarios, spanning from mobile devices to performance-demanding applications. Our models show competitive performance across major agent benchmarks. Alongside these models, we provide comprehensive technical reports on model training, evaluation, and synthetic data generation, as well as detailed insights from our new environments and simulations.
Full author list: Jianguo Zhang∗, Tian Lan∗, Ming Zhu∗, Zuxin Liu∗, Thai Hoang∗, Shirley Kokane†, Weiran Yao†, Juntao Tan, Akshara Prabhakar, Zhiwei Liu, Haolin Chen, Yihao Feng, Tulika Awalgaonkar, Rithesh Murthy, Eric Hu, Zeyuan Chen, Ran Xu, Juan Carlos Niebles, Shelby Heinecke, Huan Wang, Silvio Savarese, Caiming Xiong