FLAIRS-39: THE 39TH INTERNATIONAL FLORIDA AI RESEARCH SOCIETY (FLAIRS) CONFERENCE
PROGRAM

Days: Sunday, May 17th | Monday, May 18th | Tuesday, May 19th | Wednesday, May 20th

Sunday, May 17th


13:30-15:00 Session 1A: Part 1: Game Theory and Reinforcement Learning: Two Perspectives, One Frontier

Dr. Prithviraj (Raj) Dasgupta, Section Head (Acting), Distributed Intelligent Systems Section, Naval Research Laboratory, Washington, DC, USA.

Abstract: Reinforcement learning (RL) is a widely used learning paradigm that has shown significant successes on many hard AI problems, including mastering real-time strategy games, autonomous driving, and LLM alignment. Game theory provides the formal mathematical framework underlying many of the problems solved by RL. However, these two areas are usually taught, and often researched, independently of each other. In this tutorial, I will attempt to bridge this gap by introducing the fundamental concepts of RL and game theory and drawing parallels between RL algorithm concepts, such as value updates, credit assignment, advantage, and policy convergence, and their game-theoretic counterparts, such as backward induction, Nash equilibrium, and regret. We will use a 2-player game in an AI Gymnasium environment as a hands-on, working example to illustrate how these concepts are analyzed and solved in RL and game theory respectively.
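The parallel the tutorial draws can be previewed in a few lines. The sketch below (an illustration, not the tutorial's actual material; the payoff matrix and function names are assumptions) solves a 2x2 prisoner's dilemma by best-response enumeration, the game-theoretic route, and shows the tabular Q-value update that is its RL counterpart:

```python
# Payoff matrices: rows = player 1's action, columns = player 2's action.
# Actions: 0 = cooperate, 1 = defect (classic prisoner's dilemma values).
P1 = [[3, 0],
      [5, 1]]
P2 = [[3, 5],
      [0, 1]]

def pure_nash(P1, P2):
    """Enumerate action profiles and keep those where neither player
    can gain by unilaterally deviating -- the equilibrium check."""
    eq = []
    for a in (0, 1):
        for b in (0, 1):
            best_a = all(P1[a][b] >= P1[a2][b] for a2 in (0, 1))
            best_b = all(P2[a][b] >= P2[a][b2] for b2 in (0, 1))
            if best_a and best_b:
                eq.append((a, b))
    return eq

def q_update(q, reward, q_next_max, alpha=0.1, gamma=0.9):
    """One temporal-difference value update -- RL's iterative route
    toward the same best-response behavior."""
    return q + alpha * (reward + gamma * q_next_max - q)

print(pure_nash(P1, P2))   # mutual defection is the unique pure equilibrium
print(q_update(0.0, 1.0, 0.0))
```

Repeating the Q-update against an opponent's fixed strategy converges to the best response that the enumeration finds in one pass, which is the bridge the tutorial explores.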

Bio: Dr. Prithviraj (Raj) Dasgupta is a senior research scientist and Section Head (Acting) of the Distributed Intelligent Systems Section at the Naval Research Laboratory, Washington, DC. His research is in the area of artificial intelligence, focusing on reinforcement learning, game theory, and multi-agent systems. He has led several large, federally funded research projects in these areas and has over 150 research publications in premier journals and conferences on these topics. From 2001 to 2019, Dr. Dasgupta was a tenured full professor at the University of Nebraska, Omaha, where he established and led the CMANTIC robotics lab and developed and taught several courses on game theory, multi-robot systems, and machine learning. He has received several awards, including the ADROCA best researcher award at the University of Nebraska and multiple best paper awards; he is a senior member of the IEEE. He received his Ph.D. in Computer Engineering in 2001 from the University of California, Santa Barbara.

Location: Ballroom A
13:30-15:00 Session 1B: Part 1: Words As Weapons: Breaking AI and Agents; Then Securing Them

Pavan Reddy, The George Washington University

Abstract: As LLM systems move from prototypes into real products and research stacks, security and robustness are often underexamined relative to capability gains. This hands-on tutorial presents a code-first, Attack→Defense workflow for prompt injection in retrieval-augmented generation (RAG) and toolusing LLMs. Using a small Car Dealership web application (run in Google Colab or locally via Docker) and prepared notebooks, we reproduce three escalating scenarios and implement focused mitigations: (1) LLM→Database integration with direct and indirect prompt injection that manipulates database state; (2) EchoLeak-style indirect prompt injection for sensitive data exfiltration; and (3) injection-driven remote code execution via an LLM-controlled tool chain. Each module instruments minimal measurements (e.g., context recall, answer faithfulness, tool-call traces) and introduces lightweight defenses suitable for research and teaching (schema-validated tool calls, retrieval/source isolation, prompt/routing hardening). Attendees leave with runnable notebooks, a containerized demo app, drop-in attack and defense modules, and a repeatable evaluation workflow. This tutorial benefits scientists, applied researchers, and students who need rigorous, reproducible methods to analyze and improve LLM pipeline behavior.
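One of the defenses the abstract names, schema-validated tool calls, can be sketched in a few lines. The tool names, parameters, and function below are illustrative assumptions, not the tutorial's actual code:

```python
# Allow-list of tools an LLM may invoke, mapping each tool name to its
# required parameters and their expected Python types.
ALLOWED_TOOLS = {
    "lookup_car": {"vin": str},
    "list_inventory": {"max_results": int},
}

def validate_tool_call(name, args):
    """Reject calls to unknown tools, unexpected or missing parameters,
    or wrongly typed values. Injected instructions that try to invoke
    arbitrary tools fail this gate instead of executing."""
    schema = ALLOWED_TOOLS.get(name)
    if schema is None:
        return False, f"unknown tool: {name}"
    if set(args) != set(schema):
        return False, "unexpected or missing parameters"
    for key, expected in schema.items():
        if not isinstance(args[key], expected):
            return False, f"bad type for parameter: {key}"
    return True, "ok"

print(validate_tool_call("lookup_car", {"vin": "1HGCM82633A004352"}))
print(validate_tool_call("run_shell", {"cmd": "rm -rf /"}))  # blocked
```

The point of the design is that validation happens outside the model: even a fully compromised prompt can only request calls the schema already permits.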

Bio: Pavan Reddy is a software engineer at Automata, where he leads efforts in vulnerability management, FedRAMP ATO preparation, FIPS compliance, and the roadmap for extending security practices to AI systems. His work spans adversarial machine learning, LLM robustness, and RAG evaluation. He has delivered talks and hands-on tutorials at venues including SquadCon, ACM SIGCITE, FedCertWeek, CAPWIC, and BSidesNoVA, primarily on adversarial ML and AI security topics. He is also set to deliver a Lab session at AAAI 2026. Pavan maintains an active social-media presence sharing short-form ML/LLM content and live demos, and regularly publishes teaching materials. He combines applied engineering with research and academic experience, making him well-suited to deliver an entirely hands-on tutorial that emphasizes reproducibility, measurement, and practical experimentation for scientists and applied researchers.

Location: Ballroom B
15:30-17:00 Session 2A: Part 2: Game Theory and Reinforcement Learning: Two Perspectives, One Frontier

Dr. Prithviraj (Raj) Dasgupta, Section Head (Acting), Distributed Intelligent Systems Section, Naval Research Laboratory, Washington, DC, USA.

Location: Ballroom A
15:30-17:00 Session 2B: Part 2: Words As Weapons: Breaking AI and Agents; Then Securing Them

Pavan Reddy, The George Washington University

Location: Ballroom B
15:30-17:00 Session 2C: Hardware Acceleration for Deep Learning: Present Limits and Future Directions

David Bisant, Central Security Service

Abstract: Deep learning models require hardware acceleration, and demand for that acceleration is outstripping what current hardware can realistically deliver. If current trends continue, by 2045 half of the world's electricity would be consumed by training deep learning models. This tutorial will cover the background and history of the field, the acceleration that is currently available, and what is expected in the future.

Bio: Dr. David Bisant has over 30 years of experience in neural networks, machine learning, and the application of these algorithms to problems in engineering and the natural sciences. He received training at Colorado State University, the University of Maryland, George Washington University, and Stanford University. He has held past positions at Medtronic and Stanford University. He is currently a member of the Central Security Service, where he works in the fields of high performance computing, physical science research, and defense. He has been both a contributor to and an organizer of the FLAIRS Conference, and has co-chaired a number of special tracks, primarily the Neural Network and Data Mining Special Track, which he has co-chaired for the last 16 years.

Location: Ballroom C
Monday, May 18th


09:00-10:00 Session 3: Re-thinking Education in the Face of AI

Re-thinking Education in the Face of AI

William Swartout, Chief Science Officer, USC Institute for Creative Technologies

Abstract: When ChatGPT was released in the fall of 2022, the education community panicked. Suddenly there existed a highly capable AI that was facile with language. Teachers feared students would use Gen AI to cheat and write their essays for them, and the press and internet were rife with articles proclaiming the Death of the Term Paper. Banning AI was problematic: preventing students from using AI while in school would not prepare them for the world into which they would graduate, and the detectors that purported to tell whether text was written by an AI or a human had significant false positive and false negative error rates. Working with faculty from the USC undergraduate writing program, we developed a writing tool called ABE that takes a different approach. In ABE, we use generative AI to help students brainstorm about a topic, and then, when they are finished with their essays (which they write themselves), we use generative AI again, not as a writer but as a reader, to read their essays and offer critiques, answering questions such as: Does the essay have a good hook? Is there adequate support for the claims? Are there other points of view that should be considered, but weren't? Surveys have shown that students have received ABE very positively and have found it helpful in their writing. Stepping back a bit, I believe Gen AI is going to force us to reconsider how we teach across a very broad spectrum of intellectual domains. While each domain presents its own challenges, I believe that our experience with ABE is an exemplar of how Gen AI can be integrated into instructional design to actually improve students' critical thinking skills rather than detract from them.

Bio: William Swartout is chief science officer at the USC Institute for Creative Technologies, providing overall direction to the institute’s research programs. He is also co-director of the Center for Generative AI and Society and a research professor in the Computer Science Department at the USC Viterbi School of Engineering. Swartout has been involved in cutting-edge research and development of artificial intelligence systems throughout his career. In 2009, Swartout received the Robert Engelmore Award from the Association for the Advancement of Artificial Intelligence (AAAI) for seminal contributions to knowledge-based systems and explanation, groundbreaking research on virtual human technologies and their applications, and outstanding service to the artificial intelligence community. Swartout is a Fellow of the AAAI, has served on its Board of Councilors, and is past chair of the Special Interest Group on Artificial Intelligence (SIGART) of the Association for Computing Machinery (ACM). He has served as a member of the Air Force Scientific Advisory Board, the Board on Army Science and Technology of the National Academies, and the JFCOM Transformation Advisory Group. Prior to helping found the ICT in 1999, Swartout was the Director of the Intelligent Systems Division at the USC Information Sciences Institute. His research interests include virtual humans, natural language processing (particularly explanation and text generation), knowledge acquisition, knowledge representation, and intelligent computer-based education. He received his Ph.D. and M.S. in computer science from MIT and his bachelor’s degree from Stanford University.

Location: Ballroom (Full)
10:30-12:00 Session 4: Poster Session

Poster Session

Location: Ballroom Foyer
ShZZaM: An LLM+ATP Natural Language to Logic Translator (abstract)
HARD-Xception: A Hybrid Adversarially Robust Deepfake Detection Framework Using Frequency Decomposition and Feature Consistency Learning (abstract)
Scalable Clinical Informatics Frameworks for AI-Enabled Assistive Systems in Mental Health Care (abstract)
Semantic Length Limits in LLM Based Steganography (abstract)
Explaining Why Instrumental Rationality is Insufficient for Ethical Behavior (abstract)
Emerging AI Trends: A 2025-2026 Synthesis (abstract)
PRESENTER: Maikel Leon
Interactive Solution Viewers for Automated Theorem Proving (abstract)
Seeing the Spark Before the Flame: Wildfire Risk Detection via UNets (abstract)
A Preliminary Empirical Study of Large Language Models for Grading Debugging Problems in Programming Education (abstract)
Can LLMs Classify Vehicular Basic Safety Messages Anomalies? (abstract)
From Recommendation to Reflection: Measuring Moral Value Stability in Human–AI Collaboration Using Cognitive Value Recontextualization (abstract)
CultIcon-Bench: A Pilot Benchmark for Cultural Interpretation of Visual Icon (abstract)
Classifying Target Sentences for LLM-Generated Persuasion Attacks in Press Releases from Federal Research Agencies (abstract)
Collaboration on Waltz Labels can Achieve Qualitative Stereo Vision (abstract)
Counting Constraints in POMDPs based on PID Controllers (abstract)
Non-Stationary Spectral Decomposition Network for Econometric Time Series Forecasting (abstract)
The Judge Effect in Two-Round Legal Debate on LegalBench (abstract)
Domain-Specificity of Refusal Representations in Large Language Models (abstract)
Improving RAG/CAG Based Additional Context Retrieval from Datasets implementations via Pokemon-themed AI Chatbot (abstract)
Improving LLM Thematic Analysis through Metric-Driven Self-Correction (abstract)
The Submittals Agent: A Hybrid Workflow for Automating Submittal Extraction from Construction Specifications (abstract)
LLM-Augmented Clustering for Customer Support Ticket Triage (abstract)
Do LLMs Outperform Fine-tuned Transformers in Emotion Classification? A Case Study of Llama and RoBERTa on an Emotion Benchmark (abstract)
PRESENTER: Tim Meinert
Scalable GNN Training for Track Finding (abstract)
A Relational Model for Fine-Grained Visual Classification (abstract)
A Comparative Evaluation of Document Extraction Tools for Construction Specification Parsing (abstract)
Blockchain as a Tool for Ensuring Authenticity: Combating Fake AI-Generated Content and Misinformation (abstract)
Codify: An Intelligent Socratic Tutoring System for Programming Education (abstract)
Semantic Conversational AI for Construction Cost Analytics (abstract)
Machine Learning for Hypertension Prediction in U.S. University-Aged Students: Insights from NIH All of Us Data (abstract)
Using a chat interface for a data-driven course planning wizard (abstract)
Directional Relations in Complex Word Embeddings (abstract)
Automated IoT Threat Monitoring & Mitigation using Tiny LLMs (abstract)
Multimodal Machine Learning for Student Retention Prediction: Integrating Temporal, Textual, and Tabular Features (abstract)
Reward-Guided Fine-Tuning of Language Models with Social Feedback (abstract)
PRESENTER: Jared Scott
Comparative Study of Different Learning Paradigms for Zero-Shot Sentiment Analysis of the Low-Resource African Language Oromo (abstract)
SAGE 0.2: LLMs for DOM Informed Internet Guidance (abstract)
Automatic Translation from LIME to Clinically Meaningful Triage Explanations (abstract)
InsightBoard: An Interactive Multi-Metric Visualization and Fairness Analysis Plugin for TensorBoard (abstract)
Ghost Agents in SAT-based Models for Multi-Agent Pathfinding (abstract)
13:30-15:00 Session 5A: Main Track I

Main Track I

Location: Ballroom A
13:30
ASP∀: An Open Educational Resource for Answer Set Programming. (abstract)
13:50
A Pseudo-Boolean Formulation for Graph Database Queries (abstract)
14:10
S3FC: Scalable Sparse Spectral Fusion Clustering for Multi-Manifold Data (abstract)
14:30
Dwell Time Estimation Using Periodic Image Captures and Deep Learning (abstract)
14:50
Feasibility of Tiny Recursion Models for the Traveling Salesman Problem: Learned Insertion and 2-opt Refinement (abstract)
13:30-15:00 Session 5B: Applied Natural Language Processing I

Applied Natural Language Processing I

Location: Ballroom B
13:30
Systematic Analysis of Tokenization Properties in Low-Resource Polysynthetic NMT (abstract)
13:50
An Elicitation-Matrix Approach to Pragmatic Context Modeling in Low-Resource Machine Translation: The Case of Akuapem Twi (abstract)
14:10
Domain-Adapted NLP for Multi-Label Crash Narrative Classification under Extreme Class Imbalance (abstract)
14:30
Monitoring therapeutic plans and risk signals from clinical narratives in mental health using natural language processing (abstract)
14:40
Fine-Grained Sentence-Level Propaganda Detection in News Articles (abstract)
13:30-15:00 Session 5C: Semantics, Logics, Information Extraction and AI 1

Semantics, Logics, Information Extraction and AI 1

Location: Ballroom C
13:30
The effect of decomposition rule modeling on the efficiency of hierarchical planners (abstract)
13:50
On Using Domain Control Knowledge in Planning: Position Paper (abstract)
14:10
BDI Agent-Based Access Control Reasoning for Multimodal Retrieval-Augmented Generation (abstract)
PRESENTER: Halil Yesil
14:30
JSON-LD 1.2 and Beyond: Extensions for Machine Learning Data Exchange (abstract)
14:50
Dynamic Conditional Logic: A Complete Axiomatization of Update, Retraction, and Minimal Change (abstract)
13:30-15:00 Session 5D: AI in Games, Serious Games, and Multimedia

AI in Games, Serious Games, and Multimedia

Location: Heron
13:30
Sudoku Sage: Evaluating Correctness of LLM-Generated Moves as a Constraint Satisfaction Task (abstract)
13:50
Computational Models of Player Strategy in Roguelike Games (abstract)
14:10
When to Measure: A Multi-Agent Reinforcement Learning Approach for Efficient Tracking (abstract)
14:30
A Multi Attribute Extension of MDFT (abstract)
15:30-17:00 Session 6A: Main Track 2

Main Track 2

Location: Ballroom A
15:30
Prediction of Solar Flares Using Photospheric Magnetic Field Parameters with Deep Learning (abstract)
15:50
Forecasting Geomagnetic Disturbances with Interpretable Deep Learning (abstract)
16:10
Fragment-Based AI for Antibiotic Discovery (abstract)
16:30
FuseGO: Evaluating Embedding Fusion Across Species with Unequal Encoder Capacity for Automated Protein Function Prediction (abstract)
16:50
Graph-Based Modeling of Iceberg Dynamics from Synthetic Aperture Radar Imagery (abstract)
15:30-17:00 Session 6B: Applied Natural Language Processing 2

Applied Natural Language Processing 2

Location: Ballroom B
15:30
Propasafe-Hybrid: A Text-Based Hybrid Propaganda Detection Tool (abstract)
PRESENTER: Avijit Roy
15:50
MLSD: A Novel Few-Shot Learning Approach to Enhance Cross-Target and Cross-Domain Stance Detection (abstract)
16:10
The Role of Emotions: Investigating Communicative Roles in Models and Data for Emotion Recognition (abstract)
PRESENTER: Timothy Meinert
16:30
A Visualization of Explainable Stylometry of Presidential Speech and Writing (abstract)
15:30-17:00 Session 6C: Semantics, Logics, Information Extraction and AI 2

Semantics, Logics, Information Extraction and AI 2

Location: Ballroom C
15:30
Beyond Accuracy: Performance and Behavioral Evaluation of Multimodal AI for Suspicious Aerial Traffic Monitoring (abstract)
15:50
A Narrative-Driven Computational Framework for Clinician Burnout Surveillance (abstract)
16:10
Winning Isn’t Reasoning: Evaluating Iterative Reasoning Updating in Language Models (abstract)
16:30
Implementing Nonmonotonic Reasoning From Weakly Consistent Conditional Belief Bases (abstract)
15:30-17:00 Session 6D: Human-AI Collaboration and Augmented Intelligence 1

Human-AI Collaboration and Augmented Intelligence 1

Location: Heron
15:30
Do Programmers and AI See the Same Problem? Quantifying Cognitive Misalignment in Code Generation (abstract)
PRESENTER: Yi Zhang
15:50
The Robot Maze Test: An Evaluation of Situated Learning for Humans and Machine Agents (abstract)
16:00
AstroAid: Personalized Target Down-Selection for Amateur Astronomers (abstract)
16:10
A Study on How Well LLMs Can Assist Novices with Code Comprehension Tasks (abstract)
Tuesday, May 19th


09:00-10:00 Session 7: Explainable Neural Text Classification

Explainable Neural Text Classification

Diana Inkpen, Professor, School of Electrical Engineering and Computer Science,University of Ottawa, Canada

Abstract: Advances in Large Language Models (LLMs) allow us to develop highly accurate neural text classifiers. One of their major disadvantages is their lack of explainability, due to their black-box nature. I am looking into neural text classifiers that are explainable, in order to open their black-box architecture, at least partially. Explainability can come at the level of the classification model or at the level of the decision made for each new test instance. The explanations need to look into what was learnt from the training data (unless there is no training or minimal training) and also into the pre-trained model (LLM) that was used as a basis for the classifier. To explain the individual decisions for each test instance, one step is to calculate feature importance with methods such as LIME, SHAP, or Integrated Gradients. More useful full-text explanations can be generated via customized prompting, or via joint learning of classes and explanations during training. I will show results for two case studies: applications to legal text mining and to mental health text mining. The evaluation of the generated explanations is done via automatic measures, as well as with human judges, in order to see if they find the explanations relevant and useful.
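The perturbation idea behind the feature-importance methods the abstract mentions can be sketched simply. The example below (an illustrative assumption, not the speaker's method; the toy classifier and keyword list are invented) computes leave-one-word-out importance, the simplest relative of LIME/SHAP-style attribution:

```python
def classify(text):
    """Stand-in classifier: 'probability' the text is positive,
    here a toy keyword fraction used purely for illustration."""
    positive = {"helpful", "relevant", "useful"}
    words = text.lower().split()
    return sum(w in positive for w in words) / max(len(words), 1)

def word_importance(text):
    """Importance of each word = drop in the positive score when that
    word is removed from the input. Real LIME fits a local linear model
    over many such perturbations rather than a single deletion each."""
    words = text.split()
    base = classify(text)
    scores = {}
    for i, w in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        scores[w] = round(base - classify(reduced), 3)
    return scores

print(word_importance("the explanation was helpful"))
```

Words whose removal lowers the score carry positive importance; the same perturb-and-measure loop underlies LIME, while SHAP averages such contributions over coalitions of features.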

Bio:  Diana Inkpen is a Professor at the University of Ottawa, in the School of Electrical Engineering and Computer Science. She received her Ph.D. in Computer Science from the University of Toronto, Canada, and her M.Sc. and B.Eng. in Computer Science and Engineering from the Technical University of Cluj-Napoca, Romania. Her research is in applications of Natural Language Processing and Deep Learning. She is the editor-in-chief of the Computational Intelligence journal and the associate editor for the Natural Language Engineering journal. She published a book on Natural Language Processing for Social Media (Morgan and Claypool Publishers, Synthesis Lectures on Human Language Technologies, the third edition appeared in 2020), 11 book chapters, more than 45 journal articles, and more than 150 conference papers. She has received many research grants, the majority of which include intensive industrial collaborations.

Location: Ballroom (Full)
10:30-12:00 Session 8A: Main Track 3

Main Track 3

Location: Ballroom A
10:30
Impact of AI-Generated Queries on SQL Code Complexity (abstract)
PRESENTER: Janka Pecuchová
10:50
ACE-TA: An Agentic Teaching Assistant for Grounded Q&A, Quiz Generation, and Code Tutoring (abstract)
11:10
Effects of Personalization in Large Language Model Tutors on Cognitive Load during Mathematics Learning (abstract)
11:30
Evaluating Personalized Content Using Large Language Models (abstract)
11:50
Evaluating Logical Structure in Computer Programs Using LLMs (abstract)
10:30-12:00 Session 8B: Applied Natural Language Processing 3

Applied Natural Language Processing 3

Location: Ballroom B
10:30
Automatic Root-Cause Chain Extraction from Technician Maintenance Notes Using NLP and LLM Reasoning (abstract)
10:50
RAMP: Exploring the Feasibility of Detecting Physics Student Misconceptions in Writing Assignments Using Large Language Models (abstract)
11:10
Operationalization-Aware Modeling of Software Non-Functional Requirement Relationships: A Context-Aware Approach (abstract)
PRESENTER: Unnati Shah
11:30
An Iterative Self Correcting Agentic RAG System (abstract)
10:30-12:00 Session 8C: AI in Healthcare Informatics 1

AI in Healthcare Informatics 1

Location: Ballroom C
10:30
Multi-Label Heart Disease Classification Using Electrocardiograms and Machine Learning (abstract)
10:50
A Comparative Study of Deep Learning Architectures for Multi-Label Electrocardiogram Classification (abstract)
11:10
Real-Time Neck Posture Classification Using a Lightweight Wearable IMU Pendant (abstract)
11:30
A Unified Atlas-Aware Framework for Interpretable Spatio–Temporal EEG Source Imaging (abstract)
11:50
Multimodal Chest Pathology Classification with Language and Image Transformers (abstract)
10:30-12:00 Session 8D: Neural Networks and Data Mining

Neural Networks and Data Mining

Location: Heron
10:30
LLM Pruning with Elastic Net Enhanced Wanda Strategy (abstract)
10:50
Shop-The-Room: A Zero-Shot Foundation Model Framework for Visual Discovery in E-Commerce (abstract)
11:10
When One Model Is Not Enough: Twin Training for Prioritized Decisions (abstract)
11:30
An approach to Dimensionality Reduction based on Contrastive Learning: a Preliminary Analysis (abstract)
11:50
Learning Team Synergy from Team Composition with a Siamese Transformer (abstract)
13:30-15:00 Session 9A: Main Track 4

Main Track 4

Location: Ballroom A
13:30
Bridging Expectation Signals: LLM-Based Experiments and a Behavioral Kalman Filter Framework (abstract)
13:50
Integrating Large Language Models as Cognitive Agents into the GAMA Platform for Urban Mobility Simulation (abstract)
14:10
Investigating Human-Aligned Large Language Model Uncertainty (abstract)
14:30
From Tokens to Ties: Network and Discourse Analysis of Web3 Ecosystems (abstract)
14:50
Evaluating Synthetic Sentence Coherence Using a Large Language Model (abstract)
13:30-15:00 Session 9B: Responsible NLP for High-Stakes Social Media Signals

From Hallucinations to Hybrid Interpretability: Responsible NLP for High-Stakes Social Media Signals

Bonnie J. Dorr, Professor, University of Florida

Abstract: Generative AI is powerful but often unreliable in high-stakes settings, where hallucinations and overconfidence can cause real harm. This talk presents a responsible NLP framework for analyzing sensitive social media signals through hybrid interpretability, ambiguity-aware inference, and privacy-first pipelines—treating mental-health-adjacent detection as a stress test for trustworthy language AI.

Bio: Dr. Bonnie Dorr is a Professor in the Department of Computer & Information Science & Engineering at the University of Florida and Director of the NLP & Culture Laboratory. Her expertise spans trustworthy and interpretable NLP, ambiguity-aware inference, and responsible language technologies for high-stakes applications, including social media analysis. At FLAIRS, she contributes technical depth in responsible AI and hybrid interpretability, helping broaden FLAIRS’ coverage of cutting-edge NLP while strengthening its focus on real-world deployment, evaluation rigor, and societal impact.

Location: Ballroom B
13:30-15:00 Session 9C: AI in Healthcare Informatics 2

AI in Healthcare Informatics 2

Location: Ballroom C
13:30
Metadata Engineering: Harmonizing CT Descriptors in Enterprise Imaging Systems (abstract)
13:50
A Deep Learning Framework for Automatic Multi-View Facial–Nasal Landmark Detection in Clinical Photographs (abstract)
14:10
Comparing Explanations of Competing Clinical Classification Algorithms (abstract)
14:30
Skull-Conditioned Facial Soft-Tissue Reconstruction Using Anatomy-First Deep Volumetric Inference (abstract)
14:40
UCMUNET Liver: Unified Cross-Modality 3D U-Net to Enhance Liver Segmentation in Cirrhotic Patients (abstract)
14:50
Clinical Narratives Matter: Feature-Level Fusion for Improving ICU Length-of-Stay Prediction (abstract)
13:30-15:00 Session 9D: Security, Privacy and Ethics in AI 1

Security, Privacy and Ethics in AI 1

Location: Heron
13:30
Document, Verify, Explain: A Transparent Accountability Framework for Equitable Generative AI Use in Computer Science Education (abstract)
PRESENTER: Angel Rivera
13:40
Improving Resilience Against Cyber-attacks via Reward-Shaped Reinforcement Learning in a Network Defense Game (abstract)
14:00
Evaluating Mistral 7B Instruct Jailbreak Vulnerabilities (abstract)
PRESENTER: Sina Jamshidi
14:20
Knowledge-Augmented Large Language Models for Automated Characterization of Cybersecurity Vulnerabilities (abstract)
14:40
AI-Driven Cyber Defense: Advanced Multimodal Learning for Evolving Malware Threats (abstract)
14:50
How the Architectural Design of the Detection Model Can Enhance the Effect of Adversarial Patches (abstract)
15:30-17:00 Session 10A: Main Track 5

Main Track 5

Location: Ballroom A
15:30
Lipschitz-Regularized Critics Lead to Policy Robustness Against Transition Dynamics Uncertainty (abstract)
15:50
Fast and Flexible Sampling-Based Local Replanning for Single-Query Paths in Unknown Environments (abstract)
16:10
Robotic Fall Prediction with Spatio-Temporal Processing of Egocentric Vision and Proprioception (abstract)
16:30
Have (A)I Seen this Before? Exploring LLM Metacognition Using Self-Reported Rankings and Scoring (abstract)
16:50
Learning General CP-nets Using Simulated Annealing (abstract)
15:30-17:00 Session 10B: Explainable, Fair, and Trustworthy AI 1

Explainable, Fair, and Trustworthy AI 1

Location: Ballroom B
15:30
A Seven-Layer Lifecycle Framework for Fair, Robust, and Safe AI: Guidance and a German Credit Case Study (abstract)
15:50
A Landscape of Trustworthy AI Frameworks and Metrics: Mapping to the NIST AI Risk Management Framework (abstract)
16:10
Fairness Implications of Data Minimization in Deep Collaborative Filtering (abstract)
16:30
Automated Identification of Lexical Misalignment in Preference-Stage Learning across Large Language Model Families (abstract)
15:30-17:00 Session 10C: AI in Healthcare Informatics 3
Location: Ballroom C
15:30
Evidence-Grounded Verification of Oncology Clinical Notes Using Structured EHR Data (abstract)
15:50
OncoMark: A Two-Stage Gated Framework for Cancer Hallmark Detection from Biomedical Text (abstract)
16:10
An Exploratory Study of Agentic Retrieval Augmented Generation for Mental Health Oriented Language Models (abstract)
16:30
Reliability Beyond Accuracy: Error Analysis of Agentic Tool-Augmented Reasoning in LLMs on CURE-Bench (abstract)
15:30-17:00 Session 10D: Security, Privacy and Ethics in AI 2
Location: Heron
15:30
Steganography with Large Language Models: Key Sensitivity Analysis (abstract)
15:50
Forgetting by Design: Testing the Effectiveness of Machine Unlearning in Right to Be Forgotten Data Deletion (abstract)
16:10
Approximate Decryption in Homomorphic Division and Privacy Impact (abstract)
16:30
A Scalable Approach to Solving Simulation-Based Network Security Games (abstract)
16:50
Generation and Validation of Configuration Management Code for Cyber Range Environments Using Large Language Models (abstract)
Wednesday, May 20th

09:00-10:00 Session 11: AI Systems That Think, Team, and Fight: A New Paradigm for Defense

Svitlana Volkova, Chief of AI, Office of Science and Technology, Aptima, Inc.

Abstract: As AI systems become increasingly capable, the Department of War faces a critical challenge: how do we develop, rigorously evaluate, and safely deploy multi-agent frontier AI systems across domains ranging from multimodal knowledge discovery to cognitive warfare? This talk presents lessons learned from building compound AI architectures that orchestrate large language models, vision-language models, and specialized agents through retrieval-augmented generation and agentic AI workflows. I will demonstrate how these systems enable cross-disciplinary knowledge synthesis for biosecurity, cognitive warfare planning and execution, and operator-AI team optimization in wargaming and readiness applications. Finally, I will present our emerging capabilities in multi-domain wargaming, where cognitively inspired AI agents execute doctrine-based maneuvers across air, space, cyber, and information domains. Evaluating these systems requires moving beyond traditional AI benchmarks. I will present our multi-dimensional ecosystem combining quantitative measures, qualitative SME assessments scaled through simulated domain-expert agents, and causal investigations using structure-learning algorithms to understand "why" behaviors emerge and "how" interventions affect mission outcomes. For safety evaluation, we examine human-agent-environment interactions holistically, addressing alignment failures, emergent capabilities under distributional shift, and systemic risks from multi-agent coordination through counterfactual "what-if" analysis and continuous monitoring. The era of scientifically grounded, operationally validated human-AI team optimization has begun, and this talk charts the path forward for defense applications.

Bio:  Dr. Svitlana Volkova is Chief of AI at Aptima, Inc., where she sets the company's AI vision and leads a portfolio of advanced research programs in compound frontier AI systems, human-AI teaming, and AI Test and Evaluation for national defense. A recognized thought leader in AI for national security, she has shaped the technical direction of multi-million-dollar federal research initiatives with a focus on transitioning AI technologies to operational use. Her pioneering work spans multimodal frontier models, agentic AI architectures, human digital twins, and causal AI/ML—with a focus on decision advantage, readiness, and cognitive warfare applications. Dr. Volkova has authored 100+ publications with 4,900+ citations, delivered keynotes and invited talks at premier venues spanning AI research (AAAI, ACL, EMNLP), defense (I/ITSEC, MODSIM, INFOPAC), academia (Stanford, CMU), and industry (Google Research, Amazon), and served as a trusted advisor to government leadership on AI strategy. Prior to Aptima, she led AI research initiatives at Pacific Northwest National Laboratory and conducted research at Microsoft Research. She holds a PhD in Computer Science from Johns Hopkins University.

Location: Ballroom (Full)
10:30-12:00 Session 12A: Main Track 6
Location: Ballroom A
10:30
Multi-Stream Fusion of Spatial, Frequency, and Attention Features for Robust Deepfake Detection in Low-Resolution Images (abstract)
10:50
Dense Attention-Enhanced U-Net for Complex Image Segmentation Tasks (abstract)
11:10
Botox Detection and Face Analytics Using Deep Learning (abstract)
11:30
Quantifying Modality Contributions in Vision-Language Models via Partial Information Decomposition (abstract)
11:50
Satellite Image Analysis Using Modified EfficientNet (abstract)
10:30-12:00 Session 12B: Explainable, Fair, and Trustworthy AI 2
Location: Ballroom B
10:30
Addressing a Bias in Evaluating Student Explanations of Worked Programming Examples (abstract)
10:50
ProtoPVAE: Improving Prototype Consistency and Stability with Regularized Latent Spaces (abstract)
11:10
Training Ethical Language Models via Reinforcement Learning from AI Feedback (abstract)
11:30
Advancing Fairness and Explainability in AI for Autism Diagnosis (abstract)
11:50
Explainable Hierarchical Graph Neural Networks for Structured Decision Modeling (abstract)
10:30-12:00 Session 12C: Main Track 7
Location: Ballroom C
10:30
Probing Knowledge Graph Reliability and Semantic Coherence with Language Models (abstract)
PRESENTER: Yoonhyuck Woo
10:50
Tiny KANs: A Performance Benchmark of Kolmogorov-Arnold Networks on Microcontrollers (abstract)
11:10
Comparing EPGP Surrogates and Finite Elements Under Degree-of-Freedom Parity (abstract)
11:30
TOML: Transistor Operations for Machine Learning - A Physics-Grounded Energy Efficiency Framework (abstract)
11:50
Deep Contrastive Representations for Neural-Congruency Modeling in EEG Studies of Reading Disorders (abstract)
10:30-12:00 Session 12D: Main Track 8
Location: Heron
10:30
A Fairness-Aware Semi-Supervised Clustering Method (abstract)
PRESENTER: Cristina Maier
10:50
Towards Fair Pay and Equal Work: Imposing View Time Limits in Crowdsourced Image Classification (abstract)
11:10
Scope Aware Contractor Performance Prediction Using Machine Learning and Work Package Vector Similarity (abstract)
11:30
Beyond Coefficients: Forecast-Necessity Testing for Interpretable Causal Discovery in Nonlinear Time-Series Models (abstract)
11:50
Towards a Cross-Participant Cognitive Load Classification Using Eye Tracking and Deep Learning (abstract)