ChatLLM23: Large Language Models and Generative AI Technologies
Building F23, The University of Sydney, Sydney, Australia, April 28, 2023
Conference website: http://ChatLLM23.com
Submission link: https://easychair.org/conferences/?conf=chatllm23
Abstract registration deadline: April 1, 2023
Submission deadline: April 1, 2023
Chat-LLMs @USyd: Large Language Models, their impacts on society, and reflections of our humanity.
A multi-disciplinary day of speakers, posters, and breakout discussions on the risks, challenges, and opportunities of language models and other generative AI technologies.
There is no doubt that Large Language Models (e.g., ChatGPT, LaMDA, and MT-NLG) and associated text, image, and sound transformers (e.g., DALL-E, AudioLM, and Stable Diffusion) represent a significant technology shift that will, for better or worse, impact society on multiple levels. Scholars from what are often viewed as disparate fields are working on the many challenges these complex technologies present. Despite numerous calls for cross-disciplinary collaboration to better understand the potential risks and impacts of these technologies, the long-calcified tradition of siloed knowledge has proved a challenging hurdle to overcome. For instance, as with any nascent field of study or technology, terminology is in flux and is often used in different ways by different disciplines. We see some disciplines attempt to reinvent and rediscover knowledge that other disciplines have already explored in depth. Likewise, methods and epistemologies that are common in some disciplines remain opaque to scholars in traditionally non-aligned fields.
This symposium aims to bring together academics, doctoral researchers, industry developers, and policy-makers to discuss the complex issues surrounding LLMs and other generative AI technologies. Without strong cross-disciplinary collaboration, we become inefficient at managing the rapid changes in these technologies and their potential opportunities for, and impacts on, society. The many aspects of how LLMs are created, evaluated, fine-tuned, and deployed cannot be fully addressed by any one discipline. The discipline of Philosophy of Science is an ideal host to bring together scholars, developers, and policy-makers to discuss the design decisions behind LLMs, the validity of claims made about truth and knowledge, and the empirically studied impacts of these technologies on various human populations. Attendees and abstract submissions are welcome from any field, including (but not limited to): philosophy of science, mathematics, computer science, education, law, engineering, business, neuroscience, linguistics, psychology, philosophy, communications, history, information science, political science, economics, social science, development studies, and fine arts. Without better collaborative and interdisciplinary approaches to policy and deployment around these technologies, we place ourselves behind the proverbial eight ball.
The event also aims to give more voices a platform to discuss the opportunities and risks of LLMs. For instance, the current conversation in Australia around LLMs tends to be disjointed, often misrepresented by the media, and frequently dominated by a small handful of voices. Australia could do better at addressing these issues, bringing more Australian contexts to the discussion, and communicating the issues to the public. We have secured some established researchers in the field and welcome more, but we will also set aside space for early-career academics, PhD students, and the voices of often-marginalised people.
Some example topics include:
- Impacts on society.
- Scientific design decisions made in the creation of the models.
- Explainability and trustworthiness of LLMs.
- Prompt design issues and techniques.
- What these technologies tell us about our humanity.
- The use, reuse, and misuse of datasets for training and fine-tuning.
- How we evaluate these models and how that may be improved.
- Mathematical and empirical understandings of LLMs.
- SOTA and benchmark validity.
- Knowledge capacity of LLMs.
- Impacts on specific groups and identities.
- The validity of claims made around these technologies.
- Policy guidelines on how we decide to deploy LLMs.
- Ethical issues of using annotators and reinforcement learning techniques.
- The use or banning of LLMs in educational environments.
- Impact on artists and music makers.
- What cognitive science can tell us about generative processes of LLMs.
- Legal issues surrounding copyright and use.
- Opportunities for use in health care settings.
- Connections or disconnections with Theory of Mind.
- Environmental impacts.
- LLMs as complex sociotechnical systems.
- Consequences of power imbalances caused by resources required for creation of LLMs.
- Fully open models versus fully closed models, or decentralised custom LLMs.
- LLMs in journalism and/or propaganda.
- LLMs as boundary objects between different groups of people.
This is an in-person event. Some of our invited speakers from Europe and the Americas will attend digitally, but all other speakers and attendees should plan to participate in person.
The day will consist of 20-minute speaker sessions (including 5 minutes of Q&A), panels, and breakout groups. Posters from scholars working on LLM and generative AI research will also be on display and can be viewed throughout the many breaks. Several break sessions will let participants continue discussions informally and grow their networks with others working on LLMs. Attendees are expected to participate for the full day to give and receive the most benefit from the event.
The event is strictly limited by room and catering capacity. As we expect the event to be popular, early registration is encouraged. Once capacity is reached, a waitlist will be opened. Places are prioritised for accepted speakers, poster authors, and breakout discussion session leaders.
The event will be held on Gadigal land of the Eora Nation, a site of the oldest continuous living system of knowledge on the planet. The Gadigal word Putuwa means "to warm your hands by the fire and squeeze gently the fingers of another person" (Lille Madden, 2016); it inspires us to bring diverse voices together to listen to each other as we discuss the multi-faceted issues of LLMs.
Website and registration for attendance are via bit.ly/ChatLLM23
Submission Guidelines
Submission is in the form of an abstract of no more than 500 words (excluding references).
Acceptance as a speaker, poster presenter, or breakout discussion leader will be based on these abstracts. Additionally, the review panel will seek to select a diverse range of speakers in terms of subject, discipline, and career level.
Topics must be centred on Large Language Models and other generative AI models. Consideration should be given to the fact that the audience will come from diverse disciplinary backgrounds.
The following categories are welcome:
- Abstract for a proposed talk. Maximum 500 words. Speaker positions are 20-minute blocks, which should include 5 minutes for Q&A.
- Abstract for a poster. Maximum 500 words. Poster templates will be provided to accepted authors; posters will be A1 size (594 mm × 841 mm).
- Proposal for hosting a breakout session. Maximum 500 words. The afternoon workshop sessions will run in concurrent breakout groups. Submissions to host a workshop, with an outline of aims, methods, and expected outcomes, are welcome.
All submissions must be in PDF format. For abstract submissions, any standard referencing format is acceptable.
Submissions of text produced entirely by LLMs (i.e., "generated") are prohibited. This does not prohibit authors from using LLMs to edit or polish author-written text, but the use of any model must be acknowledged.
Published proceedings
After the symposium closes, a second CFP will open for full papers of under 5,000 words (for speakers) and under 3,000 words (for poster authors and attendees) for inclusion in the conference proceedings. The second CFP will run from 29 April to 24 May and will only be available to attendees of the symposium. The referencing format and detailed guidelines for full-paper submission will be released closer to the date.
Committees
Program Committee
- Rebecca Johnson
- Dean Rickles
- Dominic Murphy
Organizing Committee
- Rebecca Johnson
- Anne Vervoort
- Angelica Breviario
- Jacob Hall
- Jay Pratt
Contact
One of:
rebecca.johnson@sydney.edu.au
anne.vervoort@sydney.edu.au
angelica.breviario@sydney.edu.au