InfantAgent-Next: A Highly Modular Generalist Agent for Automated Computer Interaction

Bin Lei1, Weitai Kang2, Zijian Zhang1, Winson Chen1, Mimi Xie3, Mingyi Hong1, Yan Yan2, Caiwen Ding1
1University of Minnesota, Twin Cities 2University of Illinois Chicago 3The University of Texas at San Antonio

Abstract

This paper introduces INFANTAGENT-NEXT, a generalist agent capable of interacting with computers in a multimodal manner, encompassing text, images, audio, and video. Unlike existing approaches that either build intricate workflows around a single large model or provide modularity only at the workflow level, our agent integrates tool-based and pure-vision agents within a highly modular architecture, enabling different models to collaboratively solve decoupled tasks in a step-by-step manner. Its generality is demonstrated by evaluation not only on pure-vision real-world benchmarks (i.e., OSWorld), but also on more general or tool-intensive benchmarks (e.g., GAIA and SWE-Bench). Specifically, we achieve 7.27% accuracy on OSWorld, higher than Claude-Computer-Use. Code and evaluation scripts are open-sourced at https://github.com/bin123apple/InfantAgent.

Demo

OSWorld Outputs

Software Example 1 Screenshot

Visual Studio Code

Powerful code editor with intelligent code completion, debugging, and Git integration.

Software Example 2 Screenshot

Google Chrome

Fast, secure web browser with developer tools and extension support.

Software Example 3 Screenshot

GNU Image Manipulation Program

Free and open-source raster graphics editor for image editing and retouching.

Software Example 4 Screenshot

LibreOffice Writer

Word processor for creating and formatting text documents, with styles and templates.

Software Example 5 Screenshot

LibreOffice Impress

Presentation software for creating and delivering slide decks.

Software Example 6 Screenshot

LibreOffice Calc

Spreadsheet application for calculations, data analysis, and charts.

BibTeX

@misc{lei2025infantagentnextmultimodalgeneralistagent,
      title={InfantAgent-Next: A Multimodal Generalist Agent for Automated Computer Interaction}, 
      author={Bin Lei and Weitai Kang and Zijian Zhang and Winson Chen and Xi Xie and Shan Zuo and Mimi Xie and Ali Payani and Mingyi Hong and Yan Yan and Caiwen Ding},
      year={2025},
      eprint={2505.10887},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2505.10887}, 
}