Dr. Seema Purohit (Professor Emeritus, Birla College of Arts, Science and Commerce, India)
Software for Digital Scholarship - JAMOVI
ABSTRACT. Jamovi is a free, open-source, standalone application that offers a point-and-click interface for R. It combines the power of R, including advanced analyses such as mixed models and factor analysis, with interface elements that SPSS users will recognize. Its functionality can be replicated within R through the jmv package.
Jamovi thus serves two audiences: it offers the advanced analyses that experienced researchers need, while also letting students learn applied statistics efficiently.
The data, transformations, and interactive output are all stored in a single file.
The output is interactive, updating on the fly as variables are added, options are set, and data is filtered or changed.
Variable transformations (e.g., compute and recode) are saved as rules that can be reviewed and modified as needed.
It can also help you learn R, since you can see how the syntax changes as you check boxes and add elements.
Designing and Demonstrating LLM-Based Autonomous Agents: A Practical Approach
ABSTRACT. This tutorial session, aimed at a diverse range of participants including AI researchers, developers, industry professionals, and academics, is meticulously structured to provide a comprehensive understanding of designing Large Language Model (LLM) based autonomous agents. We will embark on a journey from the theoretical underpinnings to real-world applications, culminating in a dynamic live demonstration.
Our initial focus will be on elucidating the core principles behind the design of LLM-based autonomous agents. We aim to demystify the complex architecture of these models, discussing their integration with various AI systems and delving into the nuances of training and optimization for practical deployment.
Subsequently, the tutorial will present an array of use cases across diverse sectors such as customer service, healthcare, and finance. These examples will not only showcase the versatility of LLM-based agents but also illuminate their transformative potential in automating and solving complex tasks.
The pinnacle of the session will be a live demonstration of a fully functioning autonomous agent. This hands-on showcase will provide attendees with an experiential understanding of the operational capabilities of these agents, highlighting their responsiveness and adaptability in a real-world setting.
This tutorial is crafted to cater to both novices and veterans in the field of LLMs. It promises to equip all attendees with the necessary knowledge and skills to design, implement, and appreciate the intricate workings of LLM-based autonomous agents, fostering a deeper engagement with this cutting-edge technology.
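The core design the tutorial describes, a model that plans, calls tools, and feeds results back into its context, can be sketched as a minimal loop. This is a toy illustration: the stub LLM, the tool name, and the task below are all invented, and a real agent would call a hosted model API instead.

```python
# Minimal sketch of an LLM-based autonomous agent loop.
# The "LLM" here is a deterministic stub; a real agent would call a model API.

def stub_llm(prompt: str) -> str:
    """Stand-in for a real LLM: picks a tool based on simple keywords."""
    if "weather" in prompt and "RESULT" not in prompt:
        return "CALL get_weather Pune"
    return "FINAL answer derived from tool results"

# Hypothetical tool registry the agent may act through.
TOOLS = {
    "get_weather": lambda city: f"28C and sunny in {city}",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    context = f"Task: {task}"
    for _ in range(max_steps):
        decision = stub_llm(context)
        if decision.startswith("FINAL"):
            return decision.removeprefix("FINAL ").strip()
        _, tool_name, arg = decision.split(maxsplit=2)
        result = TOOLS[tool_name](arg)                 # act in the environment
        context += f"\nRESULT {tool_name}: {result}"   # feed observation back

    return "step budget exhausted"

print(run_agent("What is the weather in Pune?"))
```

The step budget is the usual safeguard against the loop never emitting a final answer.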
Ramesh Srinivasan (Head, AWS Data and Analytics, AWS BU, TCS, Bangalore, India)
Navigating the Data Landscape with Data Mesh and AWS DataZone
ABSTRACT. Introduction to Data Mesh
In the era of big data, organizations face a persistent challenge: effectively managing and extracting insights from the vast quantities of data available to them. Data Mesh emerges as a transformative shift in how data is handled within an organization. Coined by Zhamak Dehghani in 2019, Data Mesh represents a departure from traditional centralized data architectures toward a more decentralized, domain-centric approach.
Unraveling the History of Data Mesh
The origins of Data Mesh trace back to the growing complexity of, and difficulties with, monolithic data architectures. Traditional centralized approaches frequently create bottlenecks, where managing, governing, and discovering data becomes burdensome. Data Mesh, as a concept, aims to address these challenges by distributing data responsibilities and aligning them with domain-specific business units.
Benefits of Data Mesh for Organizations
Implementing Data Mesh offers numerous advantages for organizations. First, it fosters a culture of data ownership, in which individual domain teams take accountability for their data products. This leads to greater agility and faster decision-making. Data Mesh also supports scalability, enabling organizations to manage growing data volumes while maintaining performance. Finally, the framework improves data quality and encourages cross-functional collaboration, establishing a more efficient and innovative data ecosystem.
Architecture Patterns for Implementing Data Mesh
The architecture of a Data Mesh is guided by a few fundamental principles: domain-oriented decentralized data ownership, data treated as a product, and federated computational governance. To put these principles into practice, organizations often combine event-driven architecture, microservices, and self-serve data infrastructure. By dismantling data silos and empowering domain teams, Data Mesh architecture patterns lay the groundwork for a more efficient and scalable data environment.
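These patterns can be illustrated with a small, self-contained sketch: domain teams publish data products into a shared self-serve catalog, and consumers are notified through events. All class and field names below are invented for illustration and do not correspond to any real platform API.

```python
# Toy model of Data Mesh principles: domain-owned data products registered
# in a shared, self-serve catalog, with event-driven notification of consumers.

from dataclasses import dataclass, field

@dataclass
class DataProduct:
    domain: str      # owning business domain
    name: str
    schema: dict     # published contract: part of treating data as a product

@dataclass
class MeshCatalog:
    products: dict = field(default_factory=dict)
    subscribers: list = field(default_factory=list)

    def publish(self, product: DataProduct) -> None:
        # The domain team self-serves: no central data team in the loop.
        key = f"{product.domain}.{product.name}"
        self.products[key] = product
        for notify in self.subscribers:   # event-driven fan-out to consumers
            notify(key)

    def discover(self, domain: str) -> list:
        return [k for k in self.products if k.startswith(domain + ".")]

catalog = MeshCatalog()
events = []
catalog.subscribers.append(events.append)
catalog.publish(DataProduct("sales", "daily_orders", {"order_id": "int"}))
print(catalog.discover("sales"))   # ['sales.daily_orders']
```

The key design point is that the catalog stores contracts and routes events, while each domain retains ownership of its data.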
Introducing AWS DataZone
Enter AWS DataZone, a data management service from Amazon Web Services designed to complement the principles of Data Mesh. AWS DataZone lets organizations catalog, discover, share, and govern data across accounts and regions. It offers features such as a business data catalog, project-based access, and governed publish-and-subscribe workflows for data assets, making it a suitable candidate for implementing Data Mesh on the AWS cloud.
Implementing Data Mesh with AWS DataZone
Leveraging AWS DataZone within a Data Mesh framework involves aligning domain teams with dedicated data zones, allowing them to store and manage their data independently. The decentralized nature of AWS DataZone complements the Data Mesh philosophy by providing domain teams with the autonomy to control their data while benefiting from the scalability and reliability of AWS infrastructure.
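The governed-access idea described here can be modelled in a few lines: a domain team owns its zone and approves cross-domain subscription requests before data is shared. This is a conceptual sketch, not the AWS SDK; the class, asset, and domain names are invented, and real DataZone interactions go through the AWS console or APIs.

```python
# Illustrative model of domain-owned zones with governed, subscription-based
# access (DataZone-style publish/subscribe), not a real AWS interface.

class DataZone:
    def __init__(self, owner_domain: str):
        self.owner_domain = owner_domain
        self.assets = {}
        self.approved = set()            # (asset, consumer_domain) pairs

    def add_asset(self, name: str, rows: list) -> None:
        self.assets[name] = rows         # the owning team manages its own data

    def approve(self, asset: str, consumer: str) -> None:
        self.approved.add((asset, consumer))

    def read(self, asset: str, consumer: str) -> list:
        # Owners read freely; other domains need an approved subscription.
        if consumer != self.owner_domain and (asset, consumer) not in self.approved:
            raise PermissionError(f"{consumer} is not subscribed to {asset}")
        return self.assets[asset]

finance = DataZone("finance")
finance.add_asset("invoices", [{"id": 1, "amount": 120.0}])
finance.approve("invoices", "marketing")
print(finance.read("invoices", "marketing"))  # access granted after approval
```

An unapproved domain attempting `finance.read("invoices", "hr")` would raise `PermissionError`, which is the governance boundary in miniature.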
Conclusion
In conclusion, the evolution of data management practices has given rise to the Data Mesh concept, offering organizations a more efficient and scalable approach to handling data. The adoption of Data Mesh, coupled with the capabilities of AWS DataZone, empowers organizations to break free from traditional centralized models and embrace a decentralized, domain-oriented data architecture. By doing so, they can unlock the full potential of their data, driving innovation and agility in an increasingly data-driven world.
Empowering Patient-Centered Care: Innovating Healthcare Diagnostics with AI and Data Insights
ABSTRACT. In an era where precision medicine is becoming the cornerstone of healthcare, integrating Artificial Intelligence (AI) with real-world data insights offers unprecedented opportunities for personalised treatment strategies. The proposed session aims to demystify the process of leveraging AI to analyse and interpret complex healthcare datasets.
During this session, we will explore the fundamental principles of personalised medicine and the pivotal role of AI in deciphering patient-specific data patterns and diagnostics. We will delve into various AI methodologies, particularly machine learning algorithms and natural language processing, which are instrumental in extracting actionable insights from diverse data types, including electronic health records and genomic information.
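As a toy illustration of the NLP side, the snippet below extracts condition mentions from mock clinical notes with a simple rule-based matcher. The notes and the term list are invented; production pipelines use trained models and de-identified data, and must handle subtleties such as negation ("no asthma symptoms"), which this naive matcher ignores.

```python
# Rule-based extraction of condition mentions from mock clinical notes:
# a deliberately simple stand-in for real clinical NLP.

import re

CONDITION_TERMS = {"hypertension", "diabetes", "asthma"}  # illustrative only

def extract_conditions(note: str) -> set:
    # Lowercase and tokenize, then intersect with the known term list.
    tokens = re.findall(r"[a-z]+", note.lower())
    return CONDITION_TERMS & set(tokens)

notes = [
    "Patient reports well-controlled Diabetes; no asthma symptoms today.",
    "History of hypertension, on medication.",
]
structured = [sorted(extract_conditions(n)) for n in notes]
print(structured)  # [['asthma', 'diabetes'], ['hypertension']]
```

Note that the first result wrongly includes "asthma" despite the negation, which is exactly the kind of gap that motivates model-based approaches.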
Moreover, the session will address crucial ethical considerations and data privacy laws relevant to the use of patient data in AI. We will discuss the importance of ethical AI development, focusing on the challenges and responsibilities of maintaining patient confidentiality and ensuring unbiased algorithmic outcomes.
The session will conclude with an overview of the rapidly evolving field of AI in healthcare, particularly the development of personalised treatment strategies based on real-world data insights.
Quantum Machine Learning Applications for Quantum Data sets
ABSTRACT. With the advent of digital-era applications and the extensive use of Artificial Intelligence, Quantum Machine Learning has become essential for designing real use cases with intensive number-crunching requirements. Such applications demand very high processing power and cannot be built on classical hardware because of limitations in processing speed. Furthermore, since these devices have a limited number of processing states, it is difficult to handle data-intensive computations. To overcome this, we need quantum computing hardware, whose features of superposition and entanglement facilitate rapid processing and reliable inference. It is essential to convert existing data sets into quantum data sets and to design quantum machine learning algorithms using quantum gate circuits to draw valid inferences.
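One simple way to turn a classical feature into a quantum data point is angle encoding, where the feature value becomes a qubit rotation angle. The sketch below is plain Python with no quantum SDK: it computes the single-qubit amplitudes cos(x/2)|0> + sin(x/2)|1> and the measurement probabilities given by the Born rule.

```python
# Sketch of "angle encoding": a classical feature x becomes the rotation
# angle of one qubit, giving the state cos(x/2)|0> + sin(x/2)|1>.

import math

def angle_encode(x: float) -> tuple:
    """Return the amplitudes of a one-qubit state encoding feature x."""
    return (math.cos(x / 2), math.sin(x / 2))

def measure_probs(state: tuple) -> tuple:
    """Born rule: probability of measuring |0> or |1> is the squared amplitude."""
    a0, a1 = state
    return (a0 ** 2, a1 ** 2)

state = angle_encode(math.pi / 2)
p0, p1 = measure_probs(state)
print(round(p0, 3), round(p1, 3))  # 0.5 0.5 -- a balanced superposition
```

Encoding a whole feature vector repeats this per qubit, after which quantum gate circuits operate on the encoded state.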
ABSTRACT. The diversity and dynamic nature of information pose significant challenges to its collection and management in today's era of big data. The goal of this session is to dive deep into the complex world of handling heterogeneous and temporal data, addressing this complexity and presenting efficient solutions.
In this session we will cover IoT-based systems and how data is collected and managed with them, along with ways to utilize these systems effectively. We will discuss the significance of the balance between data quantity and data quality in maximizing the capabilities of your system.
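As a minimal sketch of the quantity-versus-quality idea, the snippet below ingests a batch of mock sensor readings, filters out-of-range values, and reports the fraction retained. The sensor range and readings are invented for illustration.

```python
# Toy IoT ingestion step: buffer raw sensor readings, drop out-of-range
# values, and report the quality ratio (fraction of readings retained).

VALID_RANGE = (-40.0, 85.0)   # illustrative range for a temperature sensor, in C

def ingest(readings: list) -> dict:
    clean = [r for r in readings if VALID_RANGE[0] <= r <= VALID_RANGE[1]]
    return {
        "received": len(readings),
        "retained": len(clean),
        "quality_ratio": len(clean) / len(readings) if readings else 0.0,
        "data": clean,
    }

raw = [21.5, 22.0, 999.0, 21.8, -100.0]   # two spurious spikes
report = ingest(raw)
print(report["quality_ratio"])  # 0.6
```

Tracking the quality ratio over time is a cheap way to notice a failing sensor before the downstream data is polluted.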
This program will provide you with an apt toolset, whether you are an IoT enthusiast, a seasoned professional, or an aspiring engineer. Equip yourself with the knowledge and methods required for IoT data gathering and stronger management processes; these are stepping stones that push innovation in the ever-changing IoT world.