LLM-Based Autonomous Agents and Agentic Systems
Invitation Code: RFP-24-07


Knowledge-Driven and Self-Improving Systems

1. Lifelong Learning and Adaptation: Enable agents to continuously learn and adapt over time.

  • Hybrid Memory Systems: Develop memory architectures combining short-term and long-term memory (a minimal illustrative sketch follows this list).
  • Reinforcement Learning and Skill Acquisition: Techniques that enable agents to acquire new skills through guided practice and reinforcement learning.
  • Catastrophic Forgetting and Hierarchical Abstraction: Methods to prevent forgetting and manage complexity through hierarchical planning.
  • Neurosymbolic Integration: Combine neural networks with symbolic reasoning to enhance interpretability and robustness.
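
To make the hybrid-memory topic concrete, below is a minimal sketch of an agent memory that combines a bounded short-term buffer with a long-term store and naive keyword retrieval. The class and method names (HybridMemory, remember, recall) and the retrieval scheme are hypothetical illustrations, not taken from any existing framework.

```python
# Minimal sketch of a hybrid memory for an LLM agent: a bounded short-term
# buffer for the current interaction plus a long-term store with naive
# keyword-overlap retrieval. All names here are illustrative assumptions.
from collections import deque
from dataclasses import dataclass, field


@dataclass
class HybridMemory:
    short_term: deque = field(default_factory=lambda: deque(maxlen=8))
    long_term: list = field(default_factory=list)

    def remember(self, item: str) -> None:
        """Add an observation; the oldest short-term item is consolidated into long-term memory on overflow."""
        if len(self.short_term) == self.short_term.maxlen:
            self.long_term.append(self.short_term[0])
        self.short_term.append(item)

    def recall(self, query: str, k: int = 3) -> list:
        """Return the k long-term items with the largest word overlap with the query."""
        q = set(query.lower().split())
        scored = sorted(self.long_term,
                        key=lambda m: len(q & set(m.lower().split())),
                        reverse=True)
        return scored[:k]

    def context(self, query: str) -> str:
        """Build a prompt context from retrieved long-term memories plus recent turns."""
        return "\n".join(self.recall(query) + list(self.short_term))


if __name__ == "__main__":
    mem = HybridMemory()
    for i in range(12):
        mem.remember(f"observation {i}: the user prefers concise answers")
    print(mem.context("what does the user prefer?"))
```

A real system would replace the keyword overlap with embedding-based retrieval and add a consolidation policy, but the same two-tier structure applies.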

2. Agentic Frameworks

Frameworks for Multi-Agent Systems: Develop customizable, conversable agents and agent frameworks that can collaborate effectively with humans and other agents; a minimal two-agent sketch follows the list below.

  • Human-Agent Interaction: Develop advanced methods for human-in-the-loop learning, enabling effective collaboration between humans and agents.
  • Neurosymbolic Frameworks: Implement neurosymbolic techniques within multi-agent frameworks to improve reasoning capabilities and facilitate better coordination among agents.
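
As an illustration of the kind of conversable-agent framework envisioned here, the sketch below shows two agents exchanging messages in a fixed-turn loop. The Agent class, the message format, and the fake_llm placeholder for a real model call are assumptions made for this example, not an existing API.

```python
# Minimal sketch of two conversable agents collaborating on a task.
# `fake_llm` stands in for a real model call; the Agent class and message
# format are illustrative, not any particular framework's interface.
from dataclasses import dataclass, field


def fake_llm(system: str, history: list) -> str:
    """Placeholder for a real LLM call; returns a canned reply for the sketch."""
    last = history[-1]["content"] if history else ""
    return f"[{system}] responding to: {last[:40]}"


@dataclass
class Agent:
    name: str
    role: str                       # system prompt describing the agent's role
    history: list = field(default_factory=list)

    def receive(self, sender: str, message: str) -> str:
        """Record an incoming message and produce a reply via the model."""
        self.history.append({"from": sender, "content": message})
        reply = fake_llm(self.role, self.history)
        self.history.append({"from": self.name, "content": reply})
        return reply


def converse(initiator: Agent, responder: Agent, task: str, turns: int = 3) -> None:
    """Alternate messages between two agents for a fixed number of turns."""
    message = task
    for _ in range(turns):
        message = responder.receive(initiator.name, message)
        message = initiator.receive(responder.name, message)


if __name__ == "__main__":
    coder = Agent("coder", "You write Python code.")
    reviewer = Agent("reviewer", "You review code for bugs.")
    converse(coder, reviewer, "Implement a function that parses CSV rows.")
    for m in reviewer.history:
        print(m["from"], ":", m["content"])
```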

Use Cases:

  • Software Development and Coding: Utilize teams of agents for collaborative coding, debugging, and documentation.
  • Customer Support: Develop multi-agent systems to handle complex customer inquiries, provide real-time support, and enhance customer satisfaction.
  • Cybersecurity: Explore the use of agentic frameworks in identifying, mitigating, and responding to cybersecurity threats.
  • Data Analysis and Insights: Utilize agents to analyze large datasets, generate insights, and automate reporting processes.
  • Education and Training: Create agents that can serve as tutors, providing personalized learning experiences and continuous assessment.

3. Multimodal Integration

Multimodal Capabilities: Develop agents that integrate vision, language, and speech capabilities for comprehensive environmental interaction; a minimal fusion sketch follows the list below.

  • Multimodal Integration: Methods for integrating multiple modalities, enabling more sophisticated interpretation and interaction capabilities.
  • Software Development: Apply agents to software development tasks, including multimodal requirements capture via diagrams and text, visual programming, and textual and visual representations of intent and output.
  • Visual Question Answering and Data Interpretation: Enhance agents' ability to understand and respond to visual inputs, facilitating tasks like visual question answering and data interpretation.
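
The sketch below illustrates one simple way a multimodal observation (text, image, audio) might be represented and fused before being handed to an agent's policy. The Observation container and the encoder stubs are hypothetical stand-ins for real vision, language, and speech models.

```python
# Minimal sketch of a multimodal observation passed to an agent: the
# Observation container and the per-modality encoder stubs are illustrative
# assumptions, not a specific model's interface.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Observation:
    text: Optional[str] = None            # e.g. a user question
    image_bytes: Optional[bytes] = None   # e.g. a screenshot or diagram
    audio_bytes: Optional[bytes] = None   # e.g. a spoken instruction


def encode_text(text: str) -> list:
    """Stand-in text encoder; a real system would call a language model."""
    return [float(len(text))]


def encode_image(data: bytes) -> list:
    """Stand-in vision encoder; a real system would call a vision model."""
    return [float(len(data))]


def encode_audio(data: bytes) -> list:
    """Stand-in speech encoder; a real system would call a speech model."""
    return [float(len(data))]


def fuse(obs: Observation) -> list:
    """Concatenate whichever modality embeddings are present into one feature vector."""
    parts = []
    if obs.text is not None:
        parts += encode_text(obs.text)
    if obs.image_bytes is not None:
        parts += encode_image(obs.image_bytes)
    if obs.audio_bytes is not None:
        parts += encode_audio(obs.audio_bytes)
    return parts


if __name__ == "__main__":
    obs = Observation(text="What does this chart show?", image_bytes=b"\x89PNG...")
    print(fuse(obs))  # fused features handed to the agent's policy
```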

4. Ethical and Safety Considerations

Interpretable AI and Bias Mitigation: Ensure that agents provide explainable decisions and mitigate biases to operate ethically and transparently.

  • Safety Assurance: Leverage the interpretability of symbolic reasoning and the learning capabilities of neural networks to develop robust safety mechanisms that align agent actions with human values and goals; one possible shape for such a mechanism is sketched below.
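
One plausible form of such a mechanism is a symbolic rule filter placed between a neural policy and the environment, as in the minimal sketch below. The Rule format and the check_action helper are illustrative assumptions rather than an established safety API.

```python
# Minimal sketch of a symbolic safety filter between a neural policy and the
# environment: rules are explicit, human-readable predicates over proposed
# actions, so every blocked action comes with an interpretable reason.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    description: str
    violated: Callable[[dict], bool]   # returns True if the action breaks the rule


RULES = [
    Rule("never delete files outside the workspace",
         lambda a: a.get("tool") == "delete"
         and not a.get("path", "").startswith("/workspace/")),
    Rule("never send messages to unknown recipients",
         lambda a: a.get("tool") == "email"
         and a.get("to") not in {"user@example.com"}),
]


def check_action(action: dict):
    """Return (allowed, reasons); the reasons give a human-readable explanation."""
    reasons = [r.description for r in RULES if r.violated(action)]
    return (len(reasons) == 0, reasons)


if __name__ == "__main__":
    proposed = {"tool": "delete", "path": "/etc/passwd"}  # action proposed by the neural policy
    allowed, reasons = check_action(proposed)
    print("allowed" if allowed else f"blocked: {reasons}")
```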

Proposal Submission:

After a preliminary review, we may ask you to revise and resubmit your proposal. RFPs may be withdrawn as research proposals are funded or as interest in a specific topic is satisfied. Researchers should plan to submit their proposals as soon as possible.

General Requirements for Consideration, Proposal Details, FAQs

You can find this information at the bottom of the Research Gifts webpage. If your questions are not answered in the FAQs, please contact research@cisco.com.

Constraints and Other Information

IPR will remain with the university. Cisco expects customary scholarly dissemination of results and hopes that promising results will be made available to the community without limiting licenses, royalties, or other encumbrances.