AEGAP-19: Second Workshop on Architectures and Evaluation for Generality, Autonomy & Progress
Macao, China, August 10-16, 2019

Conference website: https://astro.temple.edu/~tuf66045/workshop/aegap2019/
Submission link: https://easychair.org/conferences/?conf=aegap19
Abstract registration deadline: April 12, 2019
Submission deadline: April 30, 2019
The proposed Second Workshop on Architectures and Evaluation for Generality, Autonomy and Progress in AI (AEGAP-19) focuses on the original grand dream of AI: the creation of autonomous agents with general intelligence comparable to or exceeding that of humans. “Generality” and “autonomy” are to be interpreted in their widest sense. The former covers systems that can handle a wide variety of data, situations, and tasks, that can explain themselves, and that are trustworthy; the latter covers systems that do not need constant attention and re-adjustment from their creators, that can acquire knowledge cumulatively, and that can evaluate their own progress on complex tasks, to give some examples.
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference.
The AEGAP-19 IJCAI Workshop welcomes regular papers, short position papers, and papers accompanied by demonstrations on all relevant theoretical and technical aspects of the Workshop’s theme, including (but not limited to) the following topics:
- New theoretical insights relevant to generality and autonomy
- Analysis of design requirements for generality and autonomy
- New methodologies relevant to generality and autonomy
- Design proposals for cognitive architectures targeting generality and/or autonomy
- Analysis of the potential and limitations of existing and new approaches
- Synergies and integration of AI approaches
- New programming languages or architectural principles relevant to generality and autonomy
- Novel network architectures for generality and autonomy
- New learning and/or educational methods for generality and autonomy
- Analysis, comparisons and proposals of AI/ML benchmarks and competitions
- Tasks and methods for evaluating general and autonomous AI systems
- Unified theories for evaluating general intelligence and cognitive capability
- Evaluation of multi-agent systems in competitive and cooperative scenarios
- Better characterization of task requirements and difficulty (energy, time, trials needed...) beyond algorithmic complexity
- The relation between autonomy and predictability
- Emergence of (symbolic) logic from neural networks
- Integration of top-down and bottom-up approaches (e.g. logic-based and neural-inspired)
Committees
Organizing committee
- Dr. Kristinn R. Thórisson
- Dr. Hiroshi Yamakawa
- Dr. Itsuki Noda
- Dr. Ryutaro Ichise
- Dr. Satoshi Kurihara
- Xiang Li
Contact
xiangliAGI@temple.edu