I made the title generic, but it comes from an article: "Autonomous chemical research with large language models"
https://www.nature.com/articles/s41586-023-06792-0
This is so new that Google has no references to it.
My institution is heavily into AI/ML for 'doing science' and enhancing/promoting innovation.
I expect that in the near term, humans will still be needed to write the rules. AI will become more autonomous once it can write the rules itself and manipulate digital systems and robotics.
Abstract: we show the development and capabilities of Coscientist, an artificial intelligence system driven by GPT-4 that autonomously designs, plans and performs complex experiments by incorporating large language models empowered by tools such as internet and documentation search, code execution and experimental automation. Coscientist showcases its potential for accelerating research across six diverse tasks, including the successful reaction optimization of palladium-catalysed cross-couplings, while exhibiting advanced capabilities for (semi-)autonomous experimental design and execution. Our findings demonstrate the versatility, efficacy and explainability of artificial intelligence systems like Coscientist in advancing research.
From the article
In this work, we present a multi-LLMs-based intelligent agent (hereafter simply called Coscientist) capable of autonomous design, planning and performance of complex scientific experiments. Coscientist can use tools to browse the internet and relevant documentation, use robotic experimentation application programming interfaces (APIs) and leverage other LLMs for various tasks. This work has been done independently and in parallel to other works on autonomous agents [23-25], with ChemCrow [26] serving as another example in the chemistry domain. In this paper, we demonstrate the versatility and performance of Coscientist in six tasks: (1) planning chemical syntheses of known compounds using publicly available data; (2) efficiently searching and navigating through extensive hardware documentation; (3) using documentation to execute high-level commands in a cloud laboratory; (4) precisely controlling liquid handling instruments with low-level instructions; (5) tackling complex scientific tasks that demand simultaneous use of multiple hardware modules and integration of diverse data sources; and (6) solving optimization problems requiring analyses of previously collected experimental data.
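The tool-use pattern the article describes (a planner LLM choosing among search, code execution, and hardware APIs) can be sketched as a simple dispatch loop. This is only an illustration of the general agent pattern under my own assumptions; every name below is hypothetical and none of it is Coscientist's actual API.

```python
# Minimal sketch of an LLM agent's tool-dispatch step: the planner emits
# an action name plus a payload, and the agent routes it to the matching
# tool. Tool names and stand-in functions here are invented for illustration.

from typing import Callable, Dict


def search_docs(query: str) -> str:
    """Stand-in for a documentation-search tool."""
    return f"docs results for: {query}"


def run_code(source: str) -> str:
    """Stand-in for a sandboxed code-execution tool."""
    return f"executed: {source}"


# Registry mapping planner-issued action names to tool implementations.
TOOLS: Dict[str, Callable[[str], str]] = {
    "DOCS": search_docs,
    "CODE": run_code,
}


def dispatch(action: str, payload: str) -> str:
    """Route one planner-issued action to its tool; the result would be
    fed back to the planner LLM on the next turn."""
    tool = TOOLS.get(action)
    if tool is None:
        return f"unknown action: {action}"
    return tool(payload)


if __name__ == "__main__":
    print(dispatch("DOCS", "liquid-handler pipette API"))
```

In a real system the planner call and the robotic/hardware tools replace these stubs, but the loop shape (emit action, dispatch, feed result back) is the same.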