Workshop
Global South AI
Susanna Raj · Pariya Sarin · Sudha Jamthe
Room 211 - 213
Global South in AI's mission is to bring inclusion to Language AI. The group trains new researchers from Global South languages and countries to present posters (selected by peer review) and brings them to NeurIPS to collaborate.
Schedule
Mon 9:00 a.m. - 9:30 a.m.
|
Global South in AI - Workshop - Session 1 - Inclusion in GenAI/LLM from Global South
(
in-person session
)
>
SlidesLive Video Sudha Jamthe and Susanna Raj will open the Global South in AI workshop. In this session you will get a walkthrough of the state of LLMs in Global South languages. This is an in-person session and will be streamed via the NeurIPS workshop channels. |
🔗 |
Mon 9:30 a.m. - 10:00 a.m.
|
Global South in AI - Session 2 - Distributed Cloud for GenAI Tech Stack
(
in-person session
)
>
SlidesLive Video Roxy Stimpson, VP of Innovation at F5 Inc., an American technology company specializing in security, will give a keynote about how to build a distributed cloud technology stack for generative AI. |
Roxy Stimpson 🔗 |
Mon 10:00 a.m. - 10:20 a.m.
|
Global South in AI - Workshop - Session 3 - DEIGPT: Adding Diversity and Inclusion Using LLMs
(
in-person session
)
>
SlidesLive Video Kanene Ayo Holder will present DEIGPT, about how to leverage LLMs to bring diversity and inclusion to companies. Kanene Ayo Holder is a renowned AI ethics expert and diversity consultant. With three National Endowment for the Humanities awards and an extensive background in education and interactive theater, she transforms learning experiences for organizations, schools, and nonprofits. Her work has been recognized with a Colin Powell Fellowship for Policy Study and numerous published contributions and speaking engagements, including at Columbia University and SXSW, on racism and the future of work. Kanene is also a certified diversity trainer for clients including the American Red Cross and corporations including MAGNA Global. As the AI Integrations Manager for DEIGPT, an ethical and inclusive AI model, Kanene empowers individuals and companies to develop effective, responsible AI strategies. She has delivered keynotes at industry-leading events, including Carat's Innovation Summit, and is set to present on ethical and inclusive AI at TEDx Harlem and ADWEEK this fall. |
Kanene Holder 🔗 |
Mon 10:20 a.m. - 10:40 a.m.
|
Global South in AI - Workshop - Session 4 - Explainability and Machine Unlearning in LLMs
(
in-person session
)
>
SlidesLive Video Yashaswini Viswanath, a doctoral researcher focused on the cutting-edge topic of machine unlearning, will lead a workshop on explainability and machine unlearning: how to get LLMs to forget learned bias by slicing or sharding data already held in the model's memory. Yashaswini will present the latest research on this topic and the challenges and opportunities in advancing this field. |
Yashaswini Viswanath 🔗 |
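The session above describes forgetting learned data by slicing or sharding. One well-known shard-based approach can be sketched in miniature: train a sub-model per data shard and aggregate, so that "forgetting" a point only retrains the shard that held it. This toy uses a per-shard mean as a stand-in for a real model; everything here is an illustrative sketch, not the speaker's method.

```python
# Toy shard-based unlearning: one sub-model per data shard, aggregated
# at prediction time. Removing a training point retrains only its shard.
from statistics import mean

def train_shard(shard):
    # Stand-in "model": the mean of the shard's labels.
    return mean(y for _, y in shard) if shard else 0.0

def train_sharded(data, n_shards=3):
    # Round-robin split into shards, then train each independently.
    shards = [data[i::n_shards] for i in range(n_shards)]
    return shards, [train_shard(s) for s in shards]

def predict(models):
    # Ensemble prediction: average of the shard models.
    return mean(models)

def unlearn(shards, models, point):
    # Remove the point from whichever shard holds it and retrain
    # only that shard; all other shards are untouched.
    for i, shard in enumerate(shards):
        if point in shard:
            shard.remove(point)
            models[i] = train_shard(shard)
            break
    return models
```

The design choice is the trade-off the session hints at: sharding makes deletion cheap (one shard's retraining cost) at the price of an ensemble rather than a single model.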
Mon 10:40 a.m. - 11:00 a.m.
|
Global South in AI - Workshop - Session 5 - Learnings from the NeurIPS 2023 Call about GenAI in the Global South
(
Zoom presentation
)
>
SlidesLive Video Program Chair Pariya Sarin will share learnings from this year's call and the 24 accepted posters on the state of generative AI inclusion in the Global South: the use cases, research, and low-resource language datasets in Global South languages. This presentation will highlight the reality of generative AI in Global South languages, show what is happening in Global South GenAI research communities, and discuss how we can bring inclusion to language and diffusion models. |
Pariya Sarin 🔗 |
Mon 11:00 a.m. - 11:20 a.m.
|
Global South in AI - Workshop - Session 6 - Girls in Quantum Computing
(
Zoom presentation
)
>
SlidesLive Video Elisa Torres Durney is the 18-year-old founder of Girls in Quantum Computing and a Top LinkedIn Voice in quantum computing. She founded Girls in Quantum Computing as a nonprofit and a network of girls and students from 21+ countries learning about the field of quantum computing. Elisa will present an introduction to quantum computing and offer insight into how it connects to LLMs. This will be a live Zoom session in which Elisa Torres Durney is interviewed by Pariya Sarin, our program chair, who is also an 18-year-old college student. |
Elisa Torres Durney 🔗 |
Mon 11:20 a.m. - 11:40 a.m.
|
Global South in AI - Session 7 - Climate Change and LLM
(
in-person session
)
>
SlidesLive Video John C. Havens of IEEE will talk about the important topic of climate change, the impact of LLMs on it, and what we can do about it as a research community. |
John Havens 🔗 |
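The climate session concerns the footprint of LLMs. A common back-of-the-envelope methodology multiplies hardware power draw by training time, a datacenter overhead factor (PUE), and the grid's carbon intensity. The sketch below follows that arithmetic; every default number is an illustrative assumption, not a measurement from the talk.

```python
# Back-of-the-envelope energy and CO2 estimate for a model training run.
# All defaults are illustrative assumptions (PUE and grid intensity vary
# widely by datacenter and region).
def training_footprint(gpu_count, gpu_power_kw, hours,
                       pue=1.2, grid_kgco2_per_kwh=0.4):
    """Return (energy_kWh, co2_kg) for a training run."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    co2_kg = energy_kwh * grid_kgco2_per_kwh
    return energy_kwh, co2_kg
```

For example, a hypothetical 8-GPU run at 0.4 kW per GPU for 100 hours works out to 384 kWh and roughly 154 kg of CO2 under these assumed factors.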
Mon 11:40 a.m. - 12:00 p.m.
|
Global South in AI: Author Presentations: showcase of selected GenAI in Global South posters
(
in-person session
)
>
This final segment of the Global South in AI workshop will showcase a selection of this year's 24 accepted posters on generative AI in Global South languages from around the world, with a combination of in-person and pre-recorded/livestreamed authors. Please join this session if you are interested in understanding the state of generative AI in Global South countries across Africa, Latin America, and Asia, including the use cases, research on low-resource language datasets, and mitigation of bias in Global South languages during machine translation to English. |
🔗 |
-
|
Cross-Lingual Speech-to-Speech Translation: A Generative AI Approach for Smooth Code Switching between Tamil and Dravidian Languages
(
Poster
)
>
link
Cross-lingual speech-to-speech translation from Tamil to other Dravidian languages is a critical undertaking that necessitates modern natural language processing (NLP) techniques. This research describes a revolutionary generative AI-based approach for smooth code-switching between Tamil and other Dravidian languages. To generate high-quality translations, the proposed system employs an encoder-decoder architecture with an attention mechanism. The system is trained on a vast dataset of parallel sentences in Tamil and other Indian languages, allowing it to grasp the intricacies of each language and produce correct translations. Compared to traditional statistical machine translation approaches, the proposed system has significant advantages. First, it can manage multi-language code-switching, allowing users to transition between Tamil and other Indian languages without losing context. Second, it delivers fluent, natural-sounding cross-lingual translations even with complicated sentence structures and idiomatic expressions. Finally, the system is highly scalable and easily integrated into a variety of business-to-business settings, enabling effective communication across linguistic boundaries. Overall, the proposed system offers a substantial advancement in cross-lingual speech-to-speech translation technology for low-resource Dravidian languages, with potential applications in customer service, e-commerce, and education. Its capacity to handle code-switching and generate high-quality translations makes it a suitable tool for enterprises operating in multilingual environments. |
VENKATESAN NATESAN · Shunmuga Priya MC · Arulanand Natarajan 🔗 |
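The abstract above rests on an encoder-decoder architecture with an attention mechanism. The core of attention, scoring each encoder state against the decoder's query and taking a weighted sum, can be sketched in a few lines of plain Python. This is a minimal single-step illustration of scaled dot-product attention, not the authors' system.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(query, keys, values):
    """One decoder step of scaled dot-product attention: score each
    encoder state, normalize with softmax, and return the weighted
    sum of values (the context vector fed to the decoder)."""
    scale = math.sqrt(len(query))
    weights = softmax([dot(query, k) / scale for k in keys])
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return weights, context
```

In a translation model, `keys`/`values` would be the encoder's hidden states for the Tamil source sentence and `query` the decoder state generating the target-language output; here they are just small hand-made vectors.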
-
|
Cross-Lingual Speech-to-Speech Translation: A Generative AI Approach for Smooth Code Switching between Tamil and Dravidian Languages
(
Oral
)
>
link
Cross-lingual speech-to-speech translation from Tamil to other Dravidian languages is a critical undertaking that necessitates modern natural language processing (NLP) techniques. This research describes a revolutionary generative AI-based approach for smooth code-switching between Tamil and other Dravidian languages. To generate high-quality translations, the proposed system employs an encoder-decoder architecture with an attention mechanism. The system is trained on a vast dataset of parallel sentences in Tamil and other Indian languages, allowing it to grasp the intricacies of each language and produce correct translations. Compared to traditional statistical machine translation approaches, the proposed system has significant advantages. First, it can manage multi-language code-switching, allowing users to transition between Tamil and other Indian languages without losing context. Second, it delivers fluent, natural-sounding cross-lingual translations even with complicated sentence structures and idiomatic expressions. Finally, the system is highly scalable and easily integrated into a variety of business-to-business settings, enabling effective communication across linguistic boundaries. Overall, the proposed system offers a substantial advancement in cross-lingual speech-to-speech translation technology for low-resource Dravidian languages, with potential applications in customer service, e-commerce, and education. Its capacity to handle code-switching and generate high-quality translations makes it a suitable tool for enterprises operating in multilingual environments. |
VENKATESAN NATESAN · Shunmuga Priya MC · Arulanand Natarajan 🔗 |
-
|
LLM based Machine Teacher for Kannada Language
(
Poster
)
>
link
In the realm of education, the assessment of exam papers is a pivotal component, involving the formulation of well-structured questions and the subsequent evaluation of student responses. This process is laden with challenges, consuming extensive time, resources, manpower, and expertise in the Kannada language. Moreover, human-driven evaluation is susceptible to biases influenced by the evaluator's circumstances, context, and command of the language. In light of these challenges, there emerges a transformative solution: a proficiently trained exam paper evaluation module. This module harnesses the power of large language models for Kannada to comprehend questions and responses, thereby revolutionizing the conventional evaluation process. By extracting pertinent features from the answers, the module learns to assess the content and quality of student replies, automating and significantly expediting evaluation. Here we try to address the lack of good-quality teachers of the Kannada language using GenAI and LLMs. The advantages are manifold. Not only does this innovation save time and resources, it also mitigates biases inherent in human grading, such as grader bias, cultural bias, stereotype bias, confirmation bias, the halo effect, leniency or severity, examiner fatigue, recency bias, confirmation of expectations, and subject-knowledge bias. This transformative model is versatile, capable of adapting to diverse Indian languages, subjects, grading systems, and even distinct universities. Its potential impact on education is profound, heralding a new era of efficiency and fairness in assessment procedures. By leveraging generative AI, this exam paper evaluation module heralds a future where students' performances are assessed objectively, swiftly, and consistently.
As education transcends geographical and linguistic boundaries, this model stands as a beacon of advancement, ensuring that evaluation remains equitable and unbiased across various educational contexts. In our effort to mitigate linguistic biases, we take steps such as diversifying our data, especially by including underrepresented languages like Kannada. We employ bias mitigation strategies and train the model to better comprehend language nuances and sensitivities. Additionally, we work on making the model's decision-making process more transparent and interpretable. We also maintain a system of continuous feedback to enhance the model's learning. Furthermore, the responsibility for ensuring fairness and inclusivity lies not only with the developers but also with the human designers, reviewers, and institutions using these models. Together, we collaborate to construct AI models that prioritize diversity and minimize biases in grading exam applications. |
Ramesh Thippeswamy · Sneha Thippeswamy 🔗 |
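The evaluation module above extracts features from answers and scores them. As a minimal stand-in for the LLM scoring step, the sketch below grades an answer by token overlap with a reference answer. The function name, the overlap heuristic, and the marking scale are all illustrative assumptions; a real system would replace `score_answer` with an LLM call over Kannada text.

```python
# Stand-in for an LLM-based grader: score a student answer by the
# fraction of reference-answer tokens it covers. Purely illustrative;
# a production module would call an LLM rather than count tokens.
def score_answer(student, reference, max_marks=10):
    student_tokens = set(student.lower().split())
    reference_tokens = set(reference.lower().split())
    if not reference_tokens:
        return 0
    overlap = len(student_tokens & reference_tokens) / len(reference_tokens)
    return round(overlap * max_marks)
```

Note that a rule like this carries none of the bias-mitigation properties the abstract attributes to LLM grading; it only shows where the scoring step sits in the pipeline.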
-
|
LLM based Machine Teacher for Kannada Language
(
Oral
)
>
link
In the realm of education, the assessment of exam papers is a pivotal component, involving the formulation of well-structured questions and the subsequent evaluation of student responses. This process is laden with challenges, consuming extensive time, resources, manpower, and expertise in the Kannada language. Moreover, human-driven evaluation is susceptible to biases influenced by the evaluator's circumstances, context, and command of the language. In light of these challenges, there emerges a transformative solution: a proficiently trained exam paper evaluation module. This module harnesses the power of large language models for Kannada to comprehend questions and responses, thereby revolutionizing the conventional evaluation process. By extracting pertinent features from the answers, the module learns to assess the content and quality of student replies, automating and significantly expediting evaluation. Here we try to address the lack of good-quality teachers of the Kannada language using GenAI and LLMs. The advantages are manifold. Not only does this innovation save time and resources, it also mitigates biases inherent in human grading, such as grader bias, cultural bias, stereotype bias, confirmation bias, the halo effect, leniency or severity, examiner fatigue, recency bias, confirmation of expectations, and subject-knowledge bias. This transformative model is versatile, capable of adapting to diverse Indian languages, subjects, grading systems, and even distinct universities. Its potential impact on education is profound, heralding a new era of efficiency and fairness in assessment procedures. By leveraging generative AI, this exam paper evaluation module heralds a future where students' performances are assessed objectively, swiftly, and consistently.
As education transcends geographical and linguistic boundaries, this model stands as a beacon of advancement, ensuring that evaluation remains equitable and unbiased across various educational contexts. In our effort to mitigate linguistic biases, we take steps such as diversifying our data, especially by including underrepresented languages like Kannada. We employ bias mitigation strategies and train the model to better comprehend language nuances and sensitivities. Additionally, we work on making the model's decision-making process more transparent and interpretable. We also maintain a system of continuous feedback to enhance the model's learning. Furthermore, the responsibility for ensuring fairness and inclusivity lies not only with the developers but also with the human designers, reviewers, and institutions using these models. Together, we collaborate to construct AI models that prioritize diversity and minimize biases in grading exam applications. |
Ramesh Thippeswamy · Sneha Thippeswamy 🔗 |
-
|
Generative AI’s Role in Dialect Preservation in the Global South
(
Poster
)
>
link
In the face of modern challenges to linguistic diversity such as language bias, linguistic tokenization, translation quality, dominance of major languages, preservation vs. modernization, and more, preserving unique dialects in the Global South has gained steady importance for preserving cultural heritage. This study explores how Generative Artificial Intelligence (AI) can contribute to revitalizing endangered dialects. Focusing on the Global South, where dialects hold cultural significance, the study will examine how AI techniques such as natural language processing and speech synthesis, encompassing machine translation, sentiment analysis, and acoustic modeling, can assist with documenting dialectal variations. The research addresses ethical concerns about AI's impact on authenticity and the risk of homogenization. By leveraging AI's capabilities, the study aims to empower communities in dialect preservation and develop tools for language learners to access and pass down dialectal knowledge. Incorporating linguistics, AI ethics, and cultural studies, the research highlights how AI can support dialect diversity and cultural heritage in the Global South through practical applications. In harnessing AI to support dialect preservation, this study places specific emphasis on ensuring that these technological advancements are wielded responsibly, with a steadfast commitment to enhancing cultural heritage rather than inadvertently replacing its irreplaceable meaning. |
Maira Elahi 🔗 |
-
|
Generative AI’s Role in Dialect Preservation in the Global South
(
Oral
)
>
link
In the face of modern challenges to linguistic diversity such as language bias, linguistic tokenization, translation quality, dominance of major languages, preservation vs. modernization, and more, preserving unique dialects in the Global South has gained steady importance for preserving cultural heritage. This study explores how Generative Artificial Intelligence (AI) can contribute to revitalizing endangered dialects. Focusing on the Global South, where dialects hold cultural significance, the study will examine how AI techniques such as natural language processing and speech synthesis, encompassing machine translation, sentiment analysis, and acoustic modeling, can assist with documenting dialectal variations. The research addresses ethical concerns about AI's impact on authenticity and the risk of homogenization. By leveraging AI's capabilities, the study aims to empower communities in dialect preservation and develop tools for language learners to access and pass down dialectal knowledge. Incorporating linguistics, AI ethics, and cultural studies, the research highlights how AI can support dialect diversity and cultural heritage in the Global South through practical applications. In harnessing AI to support dialect preservation, this study places specific emphasis on ensuring that these technological advancements are wielded responsibly, with a steadfast commitment to enhancing cultural heritage rather than inadvertently replacing its irreplaceable meaning. |
Maira Elahi 🔗 |
-
|
Brush for the blind: Ecosystem for visually blind to create artworks for monetary gain
(
Poster
)
>
link
Generative AI is good at generating artworks from simple prompts of even one word. This makes it a good candidate for someone who is visually impaired: they can make a prompt by voice, and that voice can be converted to text and fed to Stable Diffusion to get images. More complicated prompts are possible, but here we are looking at someone who has not had a chance to see the world. There is also a school of thought that those who are visually impaired have very good imaginative power. In these cases we can employ AI to generate images which can be put up for sale as downloadable items on the internet or as hard copies at their school, sold to collect funds for school welfare. This opens up avenues for the visually challenged. They cannot see what they create, but it is a small aid in forming avenues of donation for the blind school. This needs a website where the artworks can be displayed for download. Most importantly, it requires a software translator which can understand the native mother tongue (a Global South language) and convert the prompt to English. Second, it needs a braille interface to communicate with the visually challenged participant if they also have a hearing impairment. At the outset it looks very simple: type a prompt, get an artwork, sell it. But there are many more challenges involved if the visually challenged artist wants to make more complicated versions or art with detailed prompts. There is also the possibility that the creator wants to experience the created art: a 3D pin device using haptic technology could bring the art alive in 3D so that they can touch and feel it. NeurIPS is a global stage, and this abstract can be considered successful if people come together for a better tomorrow and give AI brushes to the blind. |
Yashaswini Viswanath · Pavitra T · Dr Meenakshi S 🔗 |
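The abstract describes a pipeline: voice, to native-language text, to an English prompt, to an image. The sketch below wires those stages together with labeled stubs; every function (`speech_to_text`, `translate_to_english`, `generate_image`) and the tiny word lexicon are hypothetical placeholders for the real ASR, translation, and Stable Diffusion components, not the authors' implementation.

```python
# Hypothetical end-to-end pipeline from the abstract: voice -> native
# text -> English prompt -> image. Each stage is an illustrative stub.
def speech_to_text(audio):
    # Stub: a real system would run automatic speech recognition here.
    return audio["transcript"]

def translate_to_english(text, lexicon):
    # Stub translator: word-by-word lookup in a tiny lexicon, keeping
    # unknown words unchanged. A real system would use an MT model.
    return " ".join(lexicon.get(word, word) for word in text.split())

def generate_image(prompt):
    # Stub for a diffusion-model call; returns a placeholder record.
    return {"prompt": prompt, "image": f"<image for: {prompt}>"}

def blind_artist_pipeline(audio, lexicon):
    native_text = speech_to_text(audio)
    english_prompt = translate_to_english(native_text, lexicon)
    return generate_image(english_prompt)
```

Structuring the system as independent stages like this would also make it easier to swap in the braille interface or the haptic 3D output the abstract mentions, since each accessibility channel only touches one end of the pipeline.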
-
|
Brush for the blind: Ecosystem for visually blind to create artworks for monetary gain
(
Oral
)
>
link
Generative AI is good at generating artworks from simple prompts of even one word. This makes it a good candidate for someone who is visually impaired: they can make a prompt by voice, and that voice can be converted to text and fed to Stable Diffusion to get images. More complicated prompts are possible, but here we are looking at someone who has not had a chance to see the world. There is also a school of thought that those who are visually impaired have very good imaginative power. In these cases we can employ AI to generate images which can be put up for sale as downloadable items on the internet or as hard copies at their school, sold to collect funds for school welfare. This opens up avenues for the visually challenged. They cannot see what they create, but it is a small aid in forming avenues of donation for the blind school. This needs a website where the artworks can be displayed for download. Most importantly, it requires a software translator which can understand the native mother tongue (a Global South language) and convert the prompt to English. Second, it needs a braille interface to communicate with the visually challenged participant if they also have a hearing impairment. At the outset it looks very simple: type a prompt, get an artwork, sell it. But there are many more challenges involved if the visually challenged artist wants to make more complicated versions or art with detailed prompts. There is also the possibility that the creator wants to experience the created art: a 3D pin device using haptic technology could bring the art alive in 3D so that they can touch and feel it. NeurIPS is a global stage, and this abstract can be considered successful if people come together for a better tomorrow and give AI brushes to the blind. |
Yashaswini Viswanath · Pavitra T · Dr Meenakshi S 🔗 |
-
|
Uncovering the Potential of Small Language Models
(
Poster
)
>
link
Large language models (LLMs) have revolutionized the field of artificial intelligence (AI), showcasing remarkable capabilities across various domains, including generating creative text and solving mathematical problems. Nevertheless, their enormous data and computational requirements have led to steep development costs. This constraint has resulted in the exclusion of many languages commonly spoken in the developing regions of the world, as the compute-, bandwidth- and labour-intensive task of developing appropriate training datasets poses limitations for researchers based in those regions. This constraint also impedes research progress in tackling the technical, ethical, and legal challenges that may arise, and that are likely to disproportionately affect those regions. In recent studies, researchers working on the English language utilized GPT-3.5 and GPT-4 to construct a small synthetic dataset of short stories consisting of vocabulary familiar to 3 to 4-year-olds. This dataset was used to train small language models (SLMs) that are orders of magnitude smaller than LLMs. Despite their reduced complexity, these SLMs produced coherent stories with diverse content spanning multiple paragraphs exhibiting almost perfect grammar, and delivered advantages beyond the simplification of training data generation. Drawing inspiration from these achievements, and recognizing their potential in addressing the digital language divide, we propose to investigate whether SLMs can be equally effective with other languages and within resource constraints faced by researchers in developing regions. |
Christine Mwase 🔗 |
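The SLM abstract above turns on models "orders of magnitude smaller than LLMs". That gap is easy to make concrete with the standard rough parameter count for a decoder-only transformer (embeddings plus per-layer attention and feed-forward weights). The formula and the example configurations below are generic illustrations, not the paper's models.

```python
# Rough parameter count for a decoder-only transformer: token
# embeddings plus, per layer, the four attention projection matrices
# and the two feed-forward matrices. Biases and norms are ignored.
def param_count(vocab, d_model, n_layers, d_ff=None):
    d_ff = d_ff or 4 * d_model          # common feed-forward width
    embed = vocab * d_model             # token embedding table
    per_layer = 4 * d_model * d_model + 2 * d_model * d_ff
    return embed + n_layers * per_layer
```

An assumed SLM-style configuration (vocab 8000, width 256, 4 layers) comes to about 5 million parameters, while an assumed LLM-scale one (vocab 32000, width 4096, 32 layers) lands in the billions, which is the several-orders-of-magnitude gap the abstract is exploiting.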
-
|
Uncovering the Potential of Small Language Models
(
Oral
)
>
link
Large language models (LLMs) have revolutionized the field of artificial intelligence (AI), showcasing remarkable capabilities across various domains, including generating creative text and solving mathematical problems. Nevertheless, their enormous data and computational requirements have led to steep development costs. This constraint has resulted in the exclusion of many languages commonly spoken in the developing regions of the world, as the compute-, bandwidth- and labour-intensive task of developing appropriate training datasets poses limitations for researchers based in those regions. This constraint also impedes research progress in tackling the technical, ethical, and legal challenges that may arise, and that are likely to disproportionately affect those regions. In recent studies, researchers working on the English language utilized GPT-3.5 and GPT-4 to construct a small synthetic dataset of short stories consisting of vocabulary familiar to 3 to 4-year-olds. This dataset was used to train small language models (SLMs) that are orders of magnitude smaller than LLMs. Despite their reduced complexity, these SLMs produced coherent stories with diverse content spanning multiple paragraphs exhibiting almost perfect grammar, and delivered advantages beyond the simplification of training data generation. Drawing inspiration from these achievements, and recognizing their potential in addressing the digital language divide, we propose to investigate whether SLMs can be equally effective with other languages and within resource constraints faced by researchers in developing regions. |
Christine Mwase 🔗 |
-
|
Empowering NLP for African Low-Resource Languages: Leveraging Llama-2 Model for Swahili and Kenyan Dialects
(
Poster
)
>
link
This research is centered on the enhancement of language modeling tailored for African low-resource languages, employing the recently introduced Llama-2 model by Meta. The primary objective is to address the existing challenges within natural language processing (NLP) for Swahili and other underutilized Kenyan dialects. In the context of contemporary neural network-based language modeling, the demand for data-rich representations has notably escalated. However, the paucity of linguistic data pertinent to low-resource languages, such as Swahili, has precipitated intricacies in the modeling process. This investigation responds to this exigency by harnessing advanced datasets and linguistic reservoirs to rectify this imbalance. The study introduces an unannotated Swahili dataset, meticulously procured through comprehensive preprocessing of raw data, alongside the incorporation of a Swahili syllabic alphabet and a dedicated dataset designed for Swahili word analogy. These contributions not only bolster the efficacy of language modeling but also extend their utility to downstream NLP tasks encompassing part-of-speech tagging, sentiment analysis, and machine translation. Hence, this study underscores the practical import of precise language modeling for languages facing resource constraints. It achieves this by not only showcasing the development of speech-to-text and question-answering systems, thereby charting new trajectories for NLP applications in Swahili, but also accentuating the potential transformative influence of these resources on digital inclusivity, information proliferation, and the emergence of innovative NLP methodologies tailor-made for underprivileged African languages. |
Rancy Chepchirchir 🔗 |
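The abstract above mentions a dedicated Swahili word-analogy dataset. Analogy evaluation is conventionally done in vector space: solve a : b :: c : ? by finding the word nearest to b - a + c. The sketch below shows that mechanism with tiny made-up 2-D embeddings (the Swahili singular/plural pairs mtoto/watoto, "child/children", and mti/miti, "tree/trees"); the vectors are illustrative, not the paper's.

```python
# Toy word-analogy solver: answer a : b :: c : ? by nearest neighbour
# to the vector b - a + c. Embeddings here are hand-made 2-D toys.
import math

def solve_analogy(a, b, c, vectors):
    target = [vb - va + vc for va, vb, vc in
              zip(vectors[a], vectors[b], vectors[c])]
    # Exclude the three query words, as analogy benchmarks usually do.
    candidates = [w for w in vectors if w not in (a, b, c)]
    return min(candidates, key=lambda w: math.dist(target, vectors[w]))
```

With real embeddings trained on the Swahili corpus, accuracy on such analogies is one way a dataset like the one described can quantify how well a model has captured the language's morphology.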
-
|
Empowering NLP for African Low-Resource Languages: Leveraging Llama-2 Model for Swahili and Kenyan Dialects
(
Oral
)
>
link
This research is centered on the enhancement of language modeling tailored for African low-resource languages, employing the recently introduced Llama-2 model by Meta. The primary objective is to address the existing challenges within natural language processing (NLP) for Swahili and other underutilized Kenyan dialects. In the context of contemporary neural network-based language modeling, the demand for data-rich representations has notably escalated. However, the paucity of linguistic data pertinent to low-resource languages, such as Swahili, has precipitated intricacies in the modeling process. This investigation responds to this exigency by harnessing advanced datasets and linguistic reservoirs to rectify this imbalance. The study introduces an unannotated Swahili dataset, meticulously procured through comprehensive preprocessing of raw data, alongside the incorporation of a Swahili syllabic alphabet and a dedicated dataset designed for Swahili word analogy. These contributions not only bolster the efficacy of language modeling but also extend their utility to downstream NLP tasks encompassing part-of-speech tagging, sentiment analysis, and machine translation. Hence, this study underscores the practical import of precise language modeling for languages facing resource constraints. It achieves this by not only showcasing the development of speech-to-text and question-answering systems, thereby charting new trajectories for NLP applications in Swahili, but also accentuating the potential transformative influence of these resources on digital inclusivity, information proliferation, and the emergence of innovative NLP methodologies tailor-made for underprivileged African languages. |
Rancy Chepchirchir 🔗 |
-
|
Generative AI for Literacy in Mali
(
Poster
)
>
link
Mali is a former French colony in West Africa with 65% illiteracy and extremely poor results in all levels of its educational system. This has been partly attributed to its colonial heritage where children are obligated to study in French despite the fact that French is not spoken in the home, its use largely restricted to the administrative domain. The vehicular language of Mali, Bambara, is spoken by about 80% of the population but efforts to transition to education in the language the people speak has been hampered by lack of curriculum materials and, generally, development of Bambara as a written, as opposed to oral, language. The project explores the use of generative AI to produce illustrated stories in Bambara for children rooted in Malian culture, along with supporting pedagogical material for students and lesson plans for teachers. While the project demonstrated that generative AI tools generate material with an extremely Global North-centric bias, human experts in the use of the tools with knowledge of Malian culture can use the tools with great productivity. In a matter of weeks, several times the total quantity of children’s literature available in Bambara that had existed before was generated. Field testing demonstrated remarkable success in interesting children that had never read in their native language to read, with a majority of children already acquainted with French phonetics learning to read Bambara accurately in a single story-based lesson and children who had not learned reading skills motivated and making significant progress due to the material being in their native language and lavishly illustrated. The approach, combined with an emphasis on native-language education, appears to be very promising for reducing the level of illiteracy and low-literacy in Mali. |
Michael Leventhal · Allahsera Auguste Tapo · Christopher Homan 🔗 |
-
|
Generative AI for Literacy in Mali
(
Oral
)
>
link
Mali is a former French colony in West Africa with 65% illiteracy and extremely poor results in all levels of its educational system. This has been partly attributed to its colonial heritage where children are obligated to study in French despite the fact that French is not spoken in the home, its use largely restricted to the administrative domain. The vehicular language of Mali, Bambara, is spoken by about 80% of the population but efforts to transition to education in the language the people speak has been hampered by lack of curriculum materials and, generally, development of Bambara as a written, as opposed to oral, language. The project explores the use of generative AI to produce illustrated stories in Bambara for children rooted in Malian culture, along with supporting pedagogical material for students and lesson plans for teachers. While the project demonstrated that generative AI tools generate material with an extremely Global North-centric bias, human experts in the use of the tools with knowledge of Malian culture can use the tools with great productivity. In a matter of weeks, several times the total quantity of children’s literature available in Bambara that had existed before was generated. Field testing demonstrated remarkable success in interesting children that had never read in their native language to read, with a majority of children already acquainted with French phonetics learning to read Bambara accurately in a single story-based lesson and children who had not learned reading skills motivated and making significant progress due to the material being in their native language and lavishly illustrated. The approach, combined with an emphasis on native-language education, appears to be very promising for reducing the level of illiteracy and low-literacy in Mali. |
Michael Leventhal · Allahsera Auguste Tapo · Christopher Homan 🔗 |
-
|
The Effect of Generative AI on Telugu
(
Poster
)
>
link
The intersection of the Telugu language and generative AI presents both promise and challenges. For Telugu, generative AI offers automation of content creation and translation, streamlining communication and cultural preservation. However, accuracy and cultural nuance remain problematic: translations often lack depth, struggling with idiomatic expressions and regional variations and leading to potential misinterpretations. To enhance its utility, generative AI must prioritize accuracy and cultural sensitivity. Addressing these concerns necessitates a more profound understanding of Telugu's linguistic diversity, encompassing its dialects and social contexts. Incorporating local knowledge and traditions into training data can significantly augment performance. This call for feedback offers an opportunity to bridge these gaps collaboratively, uniting AI developers and Telugu speakers to ensure generative AI's effective and respectful contribution to the richness of the Telugu language. |
Vanama Yaswanth 🔗 |
-
|
The Effect of Generative AI on Telugu
(
Oral
)
>
link
The intersection of the Telugu language and generative AI presents both promise and challenges. For Telugu, generative AI offers automation of content creation and translation, streamlining communication and cultural preservation. However, accuracy and cultural nuance remain problematic: translations often lack depth, struggling with idiomatic expressions and regional variations and leading to potential misinterpretations. To enhance its utility, generative AI must prioritize accuracy and cultural sensitivity. Addressing these concerns necessitates a more profound understanding of Telugu's linguistic diversity, encompassing its dialects and social contexts. Incorporating local knowledge and traditions into training data can significantly augment performance. This call for feedback offers an opportunity to bridge these gaps collaboratively, uniting AI developers and Telugu speakers to ensure generative AI's effective and respectful contribution to the richness of the Telugu language. |
Vanama Yaswanth 🔗 |
-
|
Investigating Linguistic Biases in AI Detectors against Non-Native English Scholars from the Global South
(
Poster
)
>
link
As AI-generated text detectors (AI-GTDs) become more widely utilized within academic and research contexts, concerns have been raised over their unintended ramifications due to language biases inherent within these systems. This project explores these consequences with respect to non-native English-speaking scholars, especially those from the Global South, and suggests strategies for creating more inclusive language-related uses of AI. The study notes the difference between plagiarism checkers and AI-GTDs, as well as the existence of plagiarism checkers with AI-GTDs embedded in them. I investigate how biased AI-GTDs (including plagiarism checkers with embedded AI-GTDs) impact academic progress, research, and global knowledge gaps, and evaluate the false flagging of materials written in non-native English as a potential bottleneck to effective cross-regional communication and diverse expression. I contend that language-limited AI-GTDs limit original contributions, silencing the distinctive voices of non-native scholars in their fields and leading to reduced publishing opportunities, funding prospects, and recognition for those whose linguistic norms do not adhere to Western-centric norms. Furthermore, the project stresses the additional disadvantage faced by non-native speakers from Global South regions, who often already face complex regional issues necessitating different forms of expression. To address these challenges, the project emphasizes the need for AI-GTD designers to prioritize linguistic diversity in system development. This involves including an array of linguistic styles and patterns in training datasets in order to increase detectors' ability to recognize and accommodate diverse voices. By dismantling language barriers and encouraging inclusivity, scholars from diverse backgrounds can contribute to global academic progress unimpeded. |
Gabriel Udoh 🔗 |
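The disparity the project describes can be made concrete with a small metric sketch: the rate at which human-written texts are falsely flagged as AI-generated, broken down by writer group. This is an illustrative sketch, not the study's actual methodology; the group names and data are assumptions.

```python
# Hypothetical sketch: false-flag rates of human-written texts per writer
# group, and each group's rate relative to a reference group. All names
# and data are illustrative, not from the study.

def false_flag_rate(flags):
    """flags: list of booleans, True if a human-written text was flagged."""
    return sum(flags) / len(flags) if flags else 0.0

def flag_disparity(flags_by_group, reference="native"):
    """Ratio of each group's false-flag rate to the reference group's rate."""
    base = false_flag_rate(flags_by_group[reference])
    return {group: false_flag_rate(f) / base if base else float("inf")
            for group, f in flags_by_group.items()}
```

A disparity well above 1.0 for non-native writers would quantify the bias the abstract argues against.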
-
|
Investigating Linguistic Biases in AI Detectors against Non-Native English Scholars from the Global South
(
Oral
)
>
link
As AI-generated text detectors (AI-GTDs) become more widely utilized within academic and research contexts, concerns have been raised over their unintended ramifications due to language biases inherent within these systems. This project explores these consequences with respect to non-native English-speaking scholars, especially those from the Global South, and suggests strategies for creating more inclusive language-related uses of AI. The study notes the difference between plagiarism checkers and AI-GTDs, as well as the existence of plagiarism checkers with AI-GTDs embedded in them. I investigate how biased AI-GTDs (including plagiarism checkers with embedded AI-GTDs) impact academic progress, research, and global knowledge gaps, and evaluate the false flagging of materials written in non-native English as a potential bottleneck to effective cross-regional communication and diverse expression. I contend that language-limited AI-GTDs limit original contributions, silencing the distinctive voices of non-native scholars in their fields and leading to reduced publishing opportunities, funding prospects, and recognition for those whose linguistic norms do not adhere to Western-centric norms. Furthermore, the project stresses the additional disadvantage faced by non-native speakers from Global South regions, who often already face complex regional issues necessitating different forms of expression. To address these challenges, the project emphasizes the need for AI-GTD designers to prioritize linguistic diversity in system development. This involves including an array of linguistic styles and patterns in training datasets in order to increase detectors' ability to recognize and accommodate diverse voices. By dismantling language barriers and encouraging inclusivity, scholars from diverse backgrounds can contribute to global academic progress unimpeded. |
Gabriel Udoh 🔗 |
-
|
Exploring Generative AI in Nigerian Mixed media Art
(
Poster
)
>
link
Through harnessing the capabilities of generative AI platforms such as Midjourney and Deep Daze, this research seeks to transform the essence of traditional Nigerian seed art into captivating new manifestations. The primary objective of this artistic exploration is to amplify the significance of African seeds as cultural symbols and to illuminate the intricacies of African identities through a contemporary lens. The research proceeds from an assemblage of 16 mixed media artworks randomly selected from the researcher's portfolio. These artworks serve as the foundational dataset for training two distinct AI models, accomplished using the Deep Daze and Midjourney platforms over a two-month period. In conclusion, this research establishes that the outcomes achieved with Midjourney surpass those of Deep Daze. However, it acknowledges that, as an ongoing project, the AI model will be subjected to further training utilizing a broader spectrum of Nigerian seed art. This will enable the final products to authentically capture Nigerian identities in AI-generated artwork. |
Nefertiti N Emezue · Chris Chinenye Emezue 🔗 |
-
|
Exploring Generative AI in Nigerian Mixed media Art
(
Oral
)
>
link
Through harnessing the capabilities of generative AI platforms such as Midjourney and Deep Daze, this research seeks to transform the essence of traditional Nigerian seed art into captivating new manifestations. The primary objective of this artistic exploration is to amplify the significance of African seeds as cultural symbols and to illuminate the intricacies of African identities through a contemporary lens. The research proceeds from an assemblage of 16 mixed media artworks randomly selected from the researcher's portfolio. These artworks serve as the foundational dataset for training two distinct AI models, accomplished using the Deep Daze and Midjourney platforms over a two-month period. In conclusion, this research establishes that the outcomes achieved with Midjourney surpass those of Deep Daze. However, it acknowledges that, as an ongoing project, the AI model will be subjected to further training utilizing a broader spectrum of Nigerian seed art. This will enable the final products to authentically capture Nigerian identities in AI-generated artwork. |
Nefertiti N Emezue · Chris Chinenye Emezue 🔗 |
-
|
Linguistic Colonialism in the Age of Large Language Models: A Need for Diverse and Inclusive Regional Language Considerations
(
Poster
)
>
link
LLMs are contributing to a growing issue of linguistic colonialism, underscoring the need for socially responsible approaches to safeguard low-resource and regional languages. The current tendency to prioritize English-centric models, training data, and evaluation benchmark datasets poses a potential threat to language equality and the preservation of linguistic diversity. Large Language Models (LLMs) such as LLaMA 2 and Falcon are predominantly trained on English text. The LLaMA 2 and Falcon models provide explicit information about the language composition of their training datasets, with English accounting for 89% and 100% of the data respectively. The technical report for GPT-4, however, does not explicitly state its training dataset's language(s). The evaluation and testing of these models are primarily conducted in an English environment, constraining their practical relevance to languages other than English: the benchmarks used in LLaMA 2 for commonsense reasoning, world knowledge, and reading comprehension were in English, and the benchmarks used for GPT-4, including the Uniform Bar Exam, LSAT, and GRE, were likewise in English. The LLaMA 2 report suggests that the model's applicability to other languages may be compromised because it was predominantly trained on English. The GPT-4 report, on the other hand, notes that in 24 out of 26 non-English languages GPT-4 exceeded the MMLU results obtained by GPT-3.5 in English (using auto-translated benchmark datasets), without providing detailed information about the sample sizes used for evaluation. This study proposes a shift in perspective towards the development of language models and associated benchmark datasets designed with inclusive regional language considerations. The study proposes to create separate MMLU (Massive Multitask Language Understanding) validation sets for the nine languages examined in the GPT-4 study: Italian, Spanish, French, German, Russian, Arabic, Bengali, Urdu, and Marathi. In addition, it proposes to build validation sets for six additional languages: Tamil, Bahasa, Hindi, Kannada, Gujarati, and Portuguese. Regional questions and considerations will be developed for every language rather than translated versions of the English MMLU. |
Sundaraparipurnan Narayanan 🔗 |
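As a sketch of how such per-language validation sets might be scored, the following assumes a caller-supplied `answer_fn(question, choices, lang)` wrapping whatever model is under test; the function names and data shapes are assumptions for illustration, not code from the study.

```python
# Hypothetical sketch: score a model on separate per-language MMLU-style
# validation sets built from regional questions (not auto-translated
# English MMLU). `answer_fn` is a placeholder for the model under test.

def accuracy(items, answer_fn):
    """items: list of dicts with 'question', 'choices', 'answer' keys."""
    correct = sum(
        1 for it in items
        if answer_fn(it["question"], it["choices"]) == it["answer"]
    )
    return correct / len(items)

def evaluate_by_language(validation_sets, answer_fn):
    """validation_sets: {language: [items...]}; returns per-language accuracy."""
    return {lang: accuracy(items, lambda q, c: answer_fn(q, c, lang))
            for lang, items in validation_sets.items()}
```

Reporting accuracy per language, with known sample sizes, would address the opacity the abstract criticizes in the GPT-4 evaluation.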
-
|
Linguistic Colonialism in the Age of Large Language Models: A Need for Diverse and Inclusive Regional Language Considerations
(
Oral
)
>
link
LLMs are contributing to a growing issue of linguistic colonialism, underscoring the need for socially responsible approaches to safeguard low-resource and regional languages. The current tendency to prioritize English-centric models, training data, and evaluation benchmark datasets poses a potential threat to language equality and the preservation of linguistic diversity. Large Language Models (LLMs) such as LLaMA 2 and Falcon are predominantly trained on English text. The LLaMA 2 and Falcon models provide explicit information about the language composition of their training datasets, with English accounting for 89% and 100% of the data respectively. The technical report for GPT-4, however, does not explicitly state its training dataset's language(s). The evaluation and testing of these models are primarily conducted in an English environment, constraining their practical relevance to languages other than English: the benchmarks used in LLaMA 2 for commonsense reasoning, world knowledge, and reading comprehension were in English, and the benchmarks used for GPT-4, including the Uniform Bar Exam, LSAT, and GRE, were likewise in English. The LLaMA 2 report suggests that the model's applicability to other languages may be compromised because it was predominantly trained on English. The GPT-4 report, on the other hand, notes that in 24 out of 26 non-English languages GPT-4 exceeded the MMLU results obtained by GPT-3.5 in English (using auto-translated benchmark datasets), without providing detailed information about the sample sizes used for evaluation. This study proposes a shift in perspective towards the development of language models and associated benchmark datasets designed with inclusive regional language considerations. The study proposes to create separate MMLU (Massive Multitask Language Understanding) validation sets for the nine languages examined in the GPT-4 study: Italian, Spanish, French, German, Russian, Arabic, Bengali, Urdu, and Marathi. In addition, it proposes to build validation sets for six additional languages: Tamil, Bahasa, Hindi, Kannada, Gujarati, and Portuguese. Regional questions and considerations will be developed for every language rather than translated versions of the English MMLU. |
Sundaraparipurnan Narayanan 🔗 |
-
|
Requirement for Machine Unlearning Techniques for Kannada Language
(
Poster
)
>
link
Machines capable of learning is what the world has seen in traditional machine learning over the last few years. Researchers in the emerging area of machine unlearning argue that machines should be capable of unlearning too, which will help ensure that models retain accurate content. LLMs have been trained on whatever Kannada data is out in the open: the generative AI models that produce Kannada content as the temperature is varied are not trained on curated Kannada literature and content. This results in wrong learning of the Kannada language and of its context and nuances, which are grounded in local culture. The chance of going wrong is high, and just as second-language learners are corrected by native speakers, we as native Kannada speakers have a prerogative to have access to information on what content a generative AI model was trained on. The main capability we need to exercise when mistakes are found is machine unlearning. The machine unlearning techniques being developed so far have catered to the Global North and predominantly to English. This abstract aims to bring awareness that we need research in the direction of machine unlearning for Kannada. Why does this need to be showcased at NeurIPS? 1. We are proposing language-based unlearning. 2. We are bringing awareness to Global South languages. |
Yashaswini Viswanath · Vishwanath Hulipalled · Vanama Yaswanth 🔗 |
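One concrete shape such unlearning research could take (an assumption on our part; the abstract names no specific technique) is shard-based exact unlearning in the style of SISA, where deleting a bad training example only requires retraining the shard that contained it.

```python
# Minimal sketch of shard-based exact unlearning (SISA-style). This is an
# illustrative assumption, not the authors' method; `train` is a placeholder
# training routine that just remembers its shard.

def train(shard):
    """Placeholder trainer: returns a 'model' carrying its training data."""
    return {"data": list(shard)}

def build_shards(dataset, n_shards):
    """Partition the dataset into disjoint shards by striding."""
    return [dataset[i::n_shards] for i in range(n_shards)]

def unlearn(shards, models, bad_example):
    """Remove one example and retrain only the shard that contained it."""
    for i, shard in enumerate(shards):
        if bad_example in shard:
            shard.remove(bad_example)
            models[i] = train(shard)  # only this shard is retrained
    return models
```

For a Kannada corpus, a culturally wrong or uncurated passage could be removed at the cost of retraining one shard rather than the whole model.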
-
|
Requirement for Machine Unlearning Techniques for Kannada Language
(
Oral
)
>
link
Machines capable of learning is what the world has seen in traditional machine learning over the last few years. Researchers in the emerging area of machine unlearning argue that machines should be capable of unlearning too, which will help ensure that models retain accurate content. LLMs have been trained on whatever Kannada data is out in the open: the generative AI models that produce Kannada content as the temperature is varied are not trained on curated Kannada literature and content. This results in wrong learning of the Kannada language and of its context and nuances, which are grounded in local culture. The chance of going wrong is high, and just as second-language learners are corrected by native speakers, we as native Kannada speakers have a prerogative to have access to information on what content a generative AI model was trained on. The main capability we need to exercise when mistakes are found is machine unlearning. The machine unlearning techniques being developed so far have catered to the Global North and predominantly to English. This abstract aims to bring awareness that we need research in the direction of machine unlearning for Kannada. Why does this need to be showcased at NeurIPS? 1. We are proposing language-based unlearning. 2. We are bringing awareness to Global South languages. |
Yashaswini Viswanath · Vishwanath Hulipalled · Vanama Yaswanth 🔗 |
-
|
A Case Study of Representational Harm of South Sudanese Girls & Women
(
Poster
)
>
link
South Sudan gained independence from Sudan on 9 July 2011 as the outcome of a 2005 agreement that ended Africa's longest-running civil war. Made up of the ten southernmost states of Sudan, South Sudan is one of the most diverse countries in Africa, home to over 60 major ethnic groups. Independence did not bring conflict in South Sudan to an end: civil war broke out in 2013 when the president fell out with his then-vice president, leading to a conflict that has displaced some four million people. A power-sharing agreement was signed between the warring parties in August 2018 in a bid to bring the five-year civil war to an end. South Sudan has been known as a country ravaged by war and has been negatively portrayed in the media. With the world being a global village, the proliferation of technology and the recent introduction of generative AI have amplified exaggerated gender biases and stereotypes. An extreme example arose when BuzzFeed published an article about AI-generated Barbies from different countries around the world. The results contained severe forms of representational bias, including the colorist and racist depictions that AI image generators are often prone to producing. Notably, the Barbie from South Sudan was depicted holding a rifle by her side. Based on this case study, we will focus on building an open-science Bari-language generative AI dataset that inclusively represents South Sudanese girls and women. |
Yine Nyika 🔗 |
-
|
A Case Study of Representational Harm of South Sudanese Girls & Women
(
Oral
)
>
link
South Sudan gained independence from Sudan on 9 July 2011 as the outcome of a 2005 agreement that ended Africa's longest-running civil war. Made up of the ten southernmost states of Sudan, South Sudan is one of the most diverse countries in Africa, home to over 60 major ethnic groups. Independence did not bring conflict in South Sudan to an end: civil war broke out in 2013 when the president fell out with his then-vice president, leading to a conflict that has displaced some four million people. A power-sharing agreement was signed between the warring parties in August 2018 in a bid to bring the five-year civil war to an end. South Sudan has been known as a country ravaged by war and has been negatively portrayed in the media. With the world being a global village, the proliferation of technology and the recent introduction of generative AI have amplified exaggerated gender biases and stereotypes. An extreme example arose when BuzzFeed published an article about AI-generated Barbies from different countries around the world. The results contained severe forms of representational bias, including the colorist and racist depictions that AI image generators are often prone to producing. Notably, the Barbie from South Sudan was depicted holding a rifle by her side. Based on this case study, we will focus on building an open-science Bari-language generative AI dataset that inclusively represents South Sudanese girls and women. |
Yine Nyika 🔗 |
-
|
Hindi/Hinglish words used in Gen AI
(
Poster
)
>
link
Hindi is one of the most common languages, spoken by around 57% of people in the Indian subcontinent. It is therefore quite important that generative AI be adaptive in understanding and responding well to the prompts thrown at it, without bias. When sending prompts to Bard for gender-neutral words like “shishya” (which translates to student/pupil in English), it shows only male students, neglecting females. Not only that, but it takes us back to the old days of the guru-shishya relationship rather than the present-day student. During the observations, the tool would sometimes send no response even when it understood the prompt, and would just say “sorry” in Hindi. In other cases, for example “एशियाई डॉक्टरों की तस्वीरें” (“pictures of Asian doctors”) on Stable Diffusion and even Bard, it would give only doctors from East Asian countries or images of males; there is no visibility of West Asian communities or of females. This is real unfairness in the systems and training datasets, which can be worked on and changed. LLMs need to be trained on local languages with multiple sets of data, covering larger areas and communities, and not oppressing sections of society or the terminologies (slang) used in different parts of it. There must be fairness in the system so that people have more faith in AI and use it to their benefit. If we expect no bias between humans, we should make sure AI does the same too. |
Tapasya Sariya 🔗 |
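The skew described in these observations can be quantified with a simple tally over human annotations of the generated outputs. This is an illustrative sketch with made-up labels, not the author's actual audit procedure.

```python
# Hypothetical sketch: given human annotations of the images or texts a
# model returned for one prompt, compute how much of the output represents
# a given group. Annotation labels here are illustrative only.

from collections import Counter

def representation_ratio(labels, group):
    """Fraction of outputs annotated with `group` (e.g. 'female')."""
    counts = Counter(labels)
    return counts[group] / len(labels) if labels else 0.0
```

For a gender-neutral prompt like “shishya”, a ratio near 0.0 for female depictions makes the reported bias measurable rather than anecdotal.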
-
|
Hindi/Hinglish words used in Gen AI
(
Oral
)
>
link
Hindi is one of the most common languages, spoken by around 57% of people in the Indian subcontinent. It is therefore quite important that generative AI be adaptive in understanding and responding well to the prompts thrown at it, without bias. When sending prompts to Bard for gender-neutral words like “shishya” (which translates to student/pupil in English), it shows only male students, neglecting females. Not only that, but it takes us back to the old days of the guru-shishya relationship rather than the present-day student. During the observations, the tool would sometimes send no response even when it understood the prompt, and would just say “sorry” in Hindi. In other cases, for example “एशियाई डॉक्टरों की तस्वीरें” (“pictures of Asian doctors”) on Stable Diffusion and even Bard, it would give only doctors from East Asian countries or images of males; there is no visibility of West Asian communities or of females. This is real unfairness in the systems and training datasets, which can be worked on and changed. LLMs need to be trained on local languages with multiple sets of data, covering larger areas and communities, and not oppressing sections of society or the terminologies (slang) used in different parts of it. There must be fairness in the system so that people have more faith in AI and use it to their benefit. If we expect no bias between humans, we should make sure AI does the same too. |
Tapasya Sariya 🔗 |
-
|
PERFORMANCE EVALUATION OF LARGE LANGUAGE MODELS IN MACHINE TRANSLATION AND TEXT CLASSIFICATION TASKS ON TWO GHANAIAN LANGUAGE DATASETS, TWI AND DAGBANI, AND THE ACADEMIC (MIS)USE CASES OF GENERATIVE AI IN GHANAIAN TERTIARY EDUCATION
(
Poster
)
>
link
The transformative capabilities of large language models in education, especially for developing countries, could be enormous. Machine translation and text classification for Ghanaian languages remain a persistent challenge, however, because the dearth of datasets for low-resourced languages poses a significant obstacle to achieving the perceived benefits. We present ongoing research on assessing the adaptability of large language models to Ghanaian languages and the extent to which GenAI influences tertiary education in Ghana. We are curating datasets in Twi and Dagbani to support large language models and assessing the current state of generative AI in Ghanaian tertiary education. The proposed datasets could be utilized in downstream tasks such as named entity recognition, part-of-speech tagging, question answering, and text classification. |
Rose-Mary Owusuaa Mensah Gyening 🔗 |
-
|
PERFORMANCE EVALUATION OF LARGE LANGUAGE MODELS IN MACHINE TRANSLATION AND TEXT CLASSIFICATION TASKS ON TWO GHANAIAN LANGUAGE DATASETS, TWI AND DAGBANI, AND THE ACADEMIC (MIS)USE CASES OF GENERATIVE AI IN GHANAIAN TERTIARY EDUCATION
(
Oral
)
>
link
The transformative capabilities of large language models in education, especially for developing countries, could be enormous. Machine translation and text classification for Ghanaian languages remain a persistent challenge, however, because the dearth of datasets for low-resourced languages poses a significant obstacle to achieving the perceived benefits. We present ongoing research on assessing the adaptability of large language models to Ghanaian languages and the extent to which GenAI influences tertiary education in Ghana. We are curating datasets in Twi and Dagbani to support large language models and assessing the current state of generative AI in Ghanaian tertiary education. The proposed datasets could be utilized in downstream tasks such as named entity recognition, part-of-speech tagging, question answering, and text classification. |
Rose-Mary Owusuaa Mensah Gyening 🔗 |
-
|
LLM-Amplified GenAI-Based Recommender Systems for Kannada
(
Poster
)
>
link
Recommender systems wield a significant influence on society, particularly in regions like India where providing recommendations in Kannada, the local language, serves a wide user base. These systems play a crucial role in bridging the last mile connectivity gap. Typically, there are two primary approaches: content-based and collaborative filtering. Content-based methods leverage features like movie genres, while collaborative systems rely on user ratings. However, an innovative approach emerges with the utilization of Language Models (LMs). These models possess the unique ability to comprehend content in Kannada, enabling them to decipher movie intents and themes. This marks a departure from the traditional paradigms of recommendation systems. By employing Generative AI chatbots integrated with these LMs, a transformative solution comes to light. This AI-driven chatbot can seamlessly offer movie recommendations, eliminating the necessity for English proficiency or access to a personal computer. The significance of this advancement lies in extending recommendations to individuals who are not well-versed in English. This empowers a broader audience, enabling them to access personalized movie suggestions effortlessly. Consequently, the fusion of Language Models and recommender systems represents an ingenious stride towards inclusivity and accessibility. Through this fusion, barriers are dismantled, and the power of recommendations becomes democratized, fostering a more enriched entertainment experience for everyone. |
Sneha Thippeswamy · Ramesh Thippeswamy · Yashaswini Viswanath 🔗 |
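The content-based approach mentioned above can be sketched in a few lines: rank titles by feature overlap with the user's tastes. This is a minimal illustration of the idea, not the authors' system; the titles and genres are made-up examples.

```python
# Minimal content-based recommender sketch: rank movies by genre overlap
# (Jaccard similarity) with the user's liked genres. Catalog contents are
# illustrative placeholders.

def jaccard(a, b):
    """Similarity of two feature sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(liked_genres, catalog, k=3):
    """catalog: {title: [genres...]}; return top-k titles by genre overlap."""
    ranked = sorted(catalog.items(),
                    key=lambda kv: jaccard(liked_genres, kv[1]),
                    reverse=True)
    return [title for title, _ in ranked[:k]]
```

An LLM-backed chatbot, as the abstract envisions, would replace the hand-tagged genre lists with themes and intents the model extracts from Kannada descriptions, and converse with the user in Kannada.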
-
|
LLM-Amplified GenAI-Based Recommender Systems for Kannada
(
Oral
)
>
link
Recommender systems wield a significant influence on society, particularly in regions like India where providing recommendations in Kannada, the local language, serves a wide user base. These systems play a crucial role in bridging the last mile connectivity gap. Typically, there are two primary approaches: content-based and collaborative filtering. Content-based methods leverage features like movie genres, while collaborative systems rely on user ratings. However, an innovative approach emerges with the utilization of Language Models (LMs). These models possess the unique ability to comprehend content in Kannada, enabling them to decipher movie intents and themes. This marks a departure from the traditional paradigms of recommendation systems. By employing Generative AI chatbots integrated with these LMs, a transformative solution comes to light. This AI-driven chatbot can seamlessly offer movie recommendations, eliminating the necessity for English proficiency or access to a personal computer. The significance of this advancement lies in extending recommendations to individuals who are not well-versed in English. This empowers a broader audience, enabling them to access personalized movie suggestions effortlessly. Consequently, the fusion of Language Models and recommender systems represents an ingenious stride towards inclusivity and accessibility. Through this fusion, barriers are dismantled, and the power of recommendations becomes democratized, fostering a more enriched entertainment experience for everyone. |
Sneha Thippeswamy · Ramesh Thippeswamy · Yashaswini Viswanath 🔗 |
-
|
Indian illness and Indian participants for Genome sequencing using Generative AI
(
Poster
)
>
link
A recent advancement in medicine with enormous potential for improving human health is now frequently referred to as "genomic medicine." This innovative approach to healthcare identifies people who are more likely to develop certain diseases and intervenes earlier to prevent them by using information about an individual's genetic make-up. Finding the genes responsible for illness etiology will give scientists the means to create more effective therapies and treatments. Predictive genomic medicine, which advocates screening healthy people to find those who carry alleles that increase their vulnerability to prevalent diseases like cancer and heart disease, is credited with playing a significant role in this discipline: medical professionals could then intervene even before the sickness manifests and advise patients accordingly. As a first step towards genomic medicine, numerous nations have built databases of the DNA and health data of entire populations. Additionally, a sizable number of genes that could be used to predict a person's likelihood of getting a specific condition have been discovered through biomedical research. But since numerous issues remain to be resolved, it would be naive to presume that genomic medicine will soon become a reality. Our understanding of the majority of illness genes and their functions is far from sufficient to make accurate projections about a patient's likelihood of actually contracting a disease. In addition, new political, social, ethical, and economic problems brought on by genomic medicine will need to be resolved in the near future. AlphaFold can accurately predict 3D models of protein structures and is accelerating research in nearly every field of biology. Currently, there are over 200 million known proteins, with many more found every year. 
Each one has a unique 3D shape that determines how it works and what it does, but figuring out the exact structure of a protein remains an expensive and often time-consuming process, and until now scientists have only been able to study the exact 3D structure of a tiny fraction of the proteins known to science. Finding ways to close this rapidly expanding gap and predict the structure of millions of unknown proteins can not only help us tackle disease and find new medicines more quickly but perhaps also unlock the mysteries of how life itself works. A model trained on the human genome, for example, was able to predict sites on RNA where proteins are likely to bind. This binding is important in the process of "gene expression," the conversion of DNA into proteins: specific proteins bind to RNA, limiting how much of it is then further translated into proteins, and in this way these proteins are said to mediate gene expression. To be able to predict these interactions, the model needed to intuit not just where in the genome they will take place but also how the RNA will fold, as its shape is critical to such interactions. The generative capabilities of DNA language models also allow researchers to predict how new mutations may arise in genome sequences; for example, scientists developed a genome-scale language model to predict and reconstruct the evolution of the SARS-CoV-2 virus. Indian illnesses like chikungunya, which affect villages and cripple people, are less studied, and these diseases of the Global South need specific advancements over the current methods. This abstract aims to create awareness of the need to conduct experiments on Indian illnesses using Indian specimens, and hence to have an IndiGenAI for genome sequencing that not only caters to Indian illnesses but also considers Indian participants in the experimentation. We want to bring this to the awareness of AI scientists. |
Yashaswini Viswanath · Dr Meenakshi S · Pavitra T · L Devika 🔗 |
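The binding-site prediction the abstract describes can be illustrated with a much simpler classical tool. The sketch below is a hypothetical example, not the genomic language models mentioned above: it scans a DNA sequence for candidate protein-binding motifs using a position weight matrix with invented probabilities.

```python
# A hedged, self-contained illustration of motif scanning: score every
# 4-base window of a DNA sequence against a position weight matrix (PWM)
# and report windows that look like plausible protein-binding sites.
# The PWM probabilities below are invented for this example; real models
# (including the DNA language models in the abstract) are far richer.
from math import log2

PWM = {  # base -> per-position probability in a hypothetical 4-base motif
    "A": [0.7, 0.1, 0.1, 0.6],
    "C": [0.1, 0.1, 0.7, 0.1],
    "G": [0.1, 0.7, 0.1, 0.2],
    "T": [0.1, 0.1, 0.1, 0.1],
}
BACKGROUND = 0.25  # uniform base frequency outside binding sites

def window_score(window: str) -> float:
    """Log-odds score of a 4-base window versus the background."""
    return sum(log2(PWM[base][i] / BACKGROUND) for i, base in enumerate(window))

def candidate_sites(seq: str, threshold: float = 2.0):
    """Return (position, window, rounded score) for high-scoring windows."""
    k = 4
    return [(i, seq[i:i + k], round(window_score(seq[i:i + k]), 2))
            for i in range(len(seq) - k + 1)
            if window_score(seq[i:i + k]) >= threshold]

print(candidate_sites("TTAGCATTAGCA"))  # the motif AGCA appears twice
```

A language model replaces the fixed PWM with context-dependent probabilities learned over the whole genome, but the scoring-and-thresholding idea is the same.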
-
|
Indian illness and Indian participants for Genome sequencing using Generative AI
(
Oral
)
>
link
A recent advancement in medicine with enormous potential for improving human health is increasingly referred to as "genomic medicine". This innovative approach to healthcare identifies people who are more likely to develop certain diseases and intervenes earlier to prevent those diseases, using information about the individual's genetic make-up. Finding the genes responsible for illness etiology will give scientists the means to create more effective therapies and treatments. Predictive genomic medicine, which advocates screening healthy people to find those who carry alleles that increase their vulnerability to prevalent diseases like cancer and heart disease, is credited with playing a significant role in this discipline. Medical professionals could then intervene even before the sickness manifests and provide patients with advice. As a first step towards genomic medicine, numerous nations have built databases of the DNA and health data of entire populations. Additionally, biomedical research has discovered a sizable number of genes that could be used to predict a person's likelihood of getting a specific condition. But since numerous issues remain to be resolved, it would be naive to presume that genomic medicine will soon become a reality. Our understanding of most illness genes and their functions is far from sufficient to make accurate projections about a patient's likelihood of actually contracting a disease. In addition, new political, social, ethical, and economic problems brought on by genomic medicine will need to be resolved in the near future. AlphaFold can accurately predict 3D models of protein structures and is accelerating research in nearly every field of biology. Currently, there are over 200 million known proteins, with many more found every year. 
Each one has a unique 3D shape that determines how it works and what it does. But figuring out the exact structure of a protein remains an expensive and often time-consuming process, and until now scientists have only been able to study the exact 3D structure of a tiny fraction of the proteins known to science. Finding ways to close this rapidly expanding gap and predict the structure of millions of unknown proteins can not only help us tackle disease and find new medicines more quickly, but perhaps also unlock the mysteries of how life itself works. A model trained on the human genome, for example, was able to predict sites on RNA where proteins are likely to bind. This binding is important in the process of "gene expression", the conversion of DNA into proteins. Specific proteins bind to RNA, limiting how much of it is then further translated into proteins. In this way, these proteins are said to mediate gene expression. To predict these interactions, the model needed to intuit not just where in the genome the interactions take place but also how the RNA will fold, as its shape is critical to such interactions. The generative capabilities of DNA language models also allow researchers to predict how new mutations may arise in genome sequences. For example, scientists developed a genome-scale language model to predict and reconstruct the evolution of the SARS-CoV-2 virus. Indian illnesses like Chikungunya, which affect villages and cripple people, are less studied, and these diseases of the Global South need specific advancements to the current methods. This abstract aims to create awareness of the need to conduct experiments on Indian illnesses using Indian specimens, and hence to have an IndiGenAI for genome sequences that not only caters to Indian illnesses but also includes Indian participants in the experimentation. We want to bring this to the awareness of AI scientists. |
Yashaswini Viswanath · Dr Meenakshi S · Pavitra T · L Devika 🔗 |
-
|
LLM: Patient-centred communication in colorectal cancer treatment
(
Poster
)
>
link
Prior studies have established disparities in the active involvement of patients in the treatment of colorectal cancer (CRC) among underrepresented classes of society. Nevertheless, an examination of these roles using generative AI tools has not been explicitly conducted. Establishing a communication tone between patients and the treatment team for colorectal cancer is an essential component of clinical practice, as it substantially influences the efficacy of colorectal cancer treatment. The objective is to identify the textual tones of the keyphrases commonly used by doctors in answering questions commonly asked by patients during colorectal cancer treatment. We used a scientific article corpus sourced from MEDLINE, Cochrane, the Web of Science, and PubMed to train a miniature GPT model using KerasNLP. Subsequently, we extracted the predominant keyphrases closely linked to these papers using vlT5. These keyphrases will be built into prompts with transformer agents and then fed into the trained miniature model to analyse and determine the tonality of the language using a sentiment-analysis approach with BERT. The overall aim of this project is to provide guidance to clinicians regarding their communication style when interacting with underrepresented patients diagnosed with colorectal cancer. An effective application model has the capacity to significantly influence the treatment of colorectal cancer, particularly in terms of patient-centred communication, thereby yielding advantageous patient-centred outcomes for disadvantaged groups. |
Mary Adewunmi 🔗 |
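The final step of the pipeline above maps keyphrases to a tone with a BERT sentiment model. As a minimal stand-in for that step, the sketch below classifies keyphrases with a toy lexicon; every word in the lexicon is invented for illustration and is not the project's actual vocabulary or model.

```python
# Minimal lexicon-based stand-in for the BERT tonality step described
# above: classify a clinician keyphrase as positive/negative/neutral.
# The word lists are hypothetical examples, not clinical guidance.

POSITIVE = {"reassuring", "hopeful", "supportive", "clear"}
NEGATIVE = {"terminal", "aggressive", "risky", "uncertain"}

def tone_of_keyphrase(phrase: str) -> str:
    """Classify a keyphrase by counting lexicon hits in its words."""
    words = set(phrase.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

phrases = ["supportive care plan", "aggressive tumour growth", "follow-up scan"]
print({p: tone_of_keyphrase(p) for p in phrases})
```

A transformer model differs in that it scores the whole phrase in context rather than by word lookup, but the input/output contract of the tonality step is the same.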
-
|
LLM: Patient-centred communication in colorectal cancer treatment
(
Oral
)
>
link
Prior studies have established disparities in the active involvement of patients in the treatment of colorectal cancer (CRC) among underrepresented classes of society. Nevertheless, an examination of these roles using generative AI tools has not been explicitly conducted. Establishing a communication tone between patients and the treatment team for colorectal cancer is an essential component of clinical practice, as it substantially influences the efficacy of colorectal cancer treatment. The objective is to identify the textual tones of the keyphrases commonly used by doctors in answering questions commonly asked by patients during colorectal cancer treatment. We used a scientific article corpus sourced from MEDLINE, Cochrane, the Web of Science, and PubMed to train a miniature GPT model using KerasNLP. Subsequently, we extracted the predominant keyphrases closely linked to these papers using vlT5. These keyphrases will be built into prompts with transformer agents and then fed into the trained miniature model to analyse and determine the tonality of the language using a sentiment-analysis approach with BERT. The overall aim of this project is to provide guidance to clinicians regarding their communication style when interacting with underrepresented patients diagnosed with colorectal cancer. An effective application model has the capacity to significantly influence the treatment of colorectal cancer, particularly in terms of patient-centred communication, thereby yielding advantageous patient-centred outcomes for disadvantaged groups. |
Mary Adewunmi 🔗 |
-
|
Generative AI: A boon or bane to the Tamil community
(
Poster
)
>
link
Tamil is one of the oldest classical languages in the world, with a rich history in literature, music, and the fine arts. Approximately 1.06% of the world's population speaks Tamil, with around 84.12 million being native speakers. There are many grassroots-level creators who are writers, poets, thought leaders, educators, and innovators contributing to the preservation and perpetuation of Tamil’s heritage and culture. The language is at risk of losing its roots due to technological advancement and English being the widely used mode of business communication across the globe. Adding to that fear is the introduction of generative AI (Gen AI), which is a fundamental paradigm shift from traditional AI, which is largely about organizing information created by humans for easy retrieval and ranking of relevant information and providing actionable insights and recommendations. Gen AI models are projected as a replacement for human creative capabilities owing to their ability to emulate skills that only humans may possess, and they affect the livelihood of creators whose primary source of income is their creative content. Imagine if anyone could use Gen AI tools to generate plots or stories in the style of a famous author. Wouldn’t it affect the author’s only source of income? Linguists, who work as data annotators, may be affected if Gen AI is used as a replacement for human annotators. The irony is that the creators and the linguists are not even aware that their jobs are being affected by technology, and have no clue that the data used in creating such Gen AI tools comes from them without compensating them for their work. Identifying ways to educate creators and linguists to embrace technology to accentuate their creativity, while compensating them fairly for the use of their content, and to leverage Gen AI in their workflow is the need of the hour. There is no stopping Gen AI, and making the community aware of its impact is essential. 
This can be achieved through the democratization of AI by building hyperlocal communities and educating them to use the technology for solving their own problems while addressing and mitigating the potential threats they pose through regulations. Gen AI can be used as a medium to build a knowledge base, leveraging the rich Tamil literature and creative story-telling tools to improve the workflow of the creators, which in turn helps the language co-exist with technology. A community's culture, values, and way of life are captured, communicated, preserved, and passed on via its literature, music, and various fine arts, including visual media. Gen AI’s impact on these aspects will have profound significance for how a community and its citizens evolve. This poster discusses some strategies that can be implemented to counter the negative impacts on the Tamil community caused by Gen AI. |
Abinaya Mahendiran 🔗 |
-
|
Generative AI: A boon or bane to the Tamil community
(
Oral
)
>
link
Tamil is one of the oldest classical languages in the world, with a rich history in literature, music, and the fine arts. Approximately 1.06% of the world's population speaks Tamil, with around 84.12 million being native speakers. There are many grassroots-level creators who are writers, poets, thought leaders, educators, and innovators contributing to the preservation and perpetuation of Tamil’s heritage and culture. The language is at risk of losing its roots due to technological advancement and English being the widely used mode of business communication across the globe. Adding to that fear is the introduction of generative AI (Gen AI), which is a fundamental paradigm shift from traditional AI, which is largely about organizing information created by humans for easy retrieval and ranking of relevant information and providing actionable insights and recommendations. Gen AI models are projected as a replacement for human creative capabilities owing to their ability to emulate skills that only humans may possess, and they affect the livelihood of creators whose primary source of income is their creative content. Imagine if anyone could use Gen AI tools to generate plots or stories in the style of a famous author. Wouldn’t it affect the author’s only source of income? Linguists, who work as data annotators, may be affected if Gen AI is used as a replacement for human annotators. The irony is that the creators and the linguists are not even aware that their jobs are being affected by technology, and have no clue that the data used in creating such Gen AI tools comes from them without compensating them for their work. Identifying ways to educate creators and linguists to embrace technology to accentuate their creativity, while compensating them fairly for the use of their content, and to leverage Gen AI in their workflow is the need of the hour. There is no stopping Gen AI, and making the community aware of its impact is essential. 
This can be achieved through the democratization of AI by building hyperlocal communities and educating them to use the technology for solving their own problems while addressing and mitigating the potential threats they pose through regulations. Gen AI can be used as a medium to build a knowledge base, leveraging the rich Tamil literature and creative story-telling tools to improve the workflow of the creators, which in turn helps the language co-exist with technology. A community's culture, values, and way of life are captured, communicated, preserved, and passed on via its literature, music, and various fine arts, including visual media. Gen AI’s impact on these aspects will have profound significance for how a community and its citizens evolve. This poster discusses some strategies that can be implemented to counter the negative impacts on the Tamil community caused by Gen AI. |
Abinaya Mahendiran 🔗 |
-
|
Rakshak: Kannada city wide smart city solution using LLM chatbots
(
Poster
)
>
link
RAKSHAK. Introduction: Pedestrians in Indian cities often witness terrible road accidents. Victims must often contact an emergency helpline, and help often takes a long time to arrive at the site. This delay can be reduced by installing devices in places like traffic lights that can report an accident within a certain radius to the nearest help centre. The system can also connect to an app through which people can report accidents, call the nearest emergency centre, and contact them in Kannada or any other regional language, so that street vendors, auto drivers, and other people can communicate. They can also contact the nearest hospitals, police stations, and emergency numbers fed into the app. This can minimize deaths caused by accidents and help people. Methods: Devices can be installed in places like traffic lights to help report accidents within a certain radius, using sensors such as an accelerometer, which can be built from a fixed silicon electrode, a working electrode, and spring components. Accidents can be reported to the nearest emergency centre through an app, either directly by the installed instruments or by people who witnessed the accident, in Kannada, English, or any other regional language that people such as auto drivers and vendors can speak. Language translators can be added so that the app can be used in any Indian language. The app can be connected to the nearest hospitals, police stations, fire stations, etc., which makes it easier to save people from accidents. Through this app, traffic police can learn which areas are most accident-prone and work to minimize accidents there. They can also see the average number of accidents through the app. Result: Using this app, we can track accidents in the street and minimize them. It can be used to report not only road accidents but also crimes like chain snatching, fire accidents, etc. Ultimately, with this app and the sensors used, we can save many lives. 
The role of Generative AI in this project: LLM-based chatbots that connect city-wide. These LLMs talk to each other, understand the situation, and relay information from point to point. A city-wide blanket LLM can act as a monitor for the entire gamut of devices and sensors. Language translators using LLMs throughout help native-language speakers. |
Mayank Dharani 🔗 |
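The crash-detection and dispatch logic sketched in the abstract can be illustrated in a few lines. This is a hedged sketch under invented assumptions: the 4 g threshold and the centre coordinates are placeholders, not calibrated values from the project.

```python
# Illustrative sketch of Rakshak's sensor step: flag a crash when the
# accelerometer magnitude jumps well above 1 g, then pick the nearest
# emergency centre. Threshold and coordinates are invented examples.
from math import sqrt, hypot

CRASH_THRESHOLD_G = 4.0  # sudden deceleration well beyond normal driving

def is_crash(ax: float, ay: float, az: float) -> bool:
    """True when the total acceleration magnitude exceeds the threshold."""
    return sqrt(ax * ax + ay * ay + az * az) >= CRASH_THRESHOLD_G

def nearest_centre(x: float, y: float, centres: dict) -> str:
    """Name of the emergency centre closest to the reported location."""
    return min(centres, key=lambda name: hypot(centres[name][0] - x,
                                               centres[name][1] - y))

centres = {"MG Road Station": (0.0, 0.0), "Jayanagar Hospital": (5.0, 5.0)}
if is_crash(3.0, 3.0, 1.0):
    print("alert ->", nearest_centre(1.0, 1.0, centres))
```

In the proposed system, the LLM layer would sit on top of this: it takes the alert plus any citizen report in Kannada and relays a situation summary between centres.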
-
|
Rakshak: Kannada city wide smart city solution using LLM chatbots
(
Oral
)
>
link
RAKSHAK. Introduction: Pedestrians in Indian cities often witness terrible road accidents. Victims must often contact an emergency helpline, and help often takes a long time to arrive at the site. This delay can be reduced by installing devices in places like traffic lights that can report an accident within a certain radius to the nearest help centre. The system can also connect to an app through which people can report accidents, call the nearest emergency centre, and contact them in Kannada or any other regional language, so that street vendors, auto drivers, and other people can communicate. They can also contact the nearest hospitals, police stations, and emergency numbers fed into the app. This can minimize deaths caused by accidents and help people. Methods: Devices can be installed in places like traffic lights to help report accidents within a certain radius, using sensors such as an accelerometer, which can be built from a fixed silicon electrode, a working electrode, and spring components. Accidents can be reported to the nearest emergency centre through an app, either directly by the installed instruments or by people who witnessed the accident, in Kannada, English, or any other regional language that people such as auto drivers and vendors can speak. Language translators can be added so that the app can be used in any Indian language. The app can be connected to the nearest hospitals, police stations, fire stations, etc., which makes it easier to save people from accidents. Through this app, traffic police can learn which areas are most accident-prone and work to minimize accidents there. They can also see the average number of accidents through the app. Result: Using this app, we can track accidents in the street and minimize them. It can be used to report not only road accidents but also crimes like chain snatching, fire accidents, etc. Ultimately, with this app and the sensors used, we can save many lives. 
The role of Generative AI in this project: LLM-based chatbots that connect city-wide. These LLMs talk to each other, understand the situation, and relay information from point to point. A city-wide blanket LLM can act as a monitor for the entire gamut of devices and sensors. Language translators using LLMs throughout help native-language speakers. |
Mayank Dharani 🔗 |
-
|
Machine Doctor: Borderline Schema Therapy using GenAI for Indian Rural women
(
Poster
)
>
link
Schema therapy aims to address maladaptive schemas, which can contribute to mental health conditions. The psychologist Jeffrey E. Young originally developed schema therapy to treat personality disorders, but therapists have since used it to manage a wide range of conditions. Schema therapy is a newer type of therapy that combines elements of cognitive behavioral therapy (CBT), psychoanalysis, attachment theory, and emotion-focused therapy, among others. It is an integrative approach that aims to treat personality disorders and other mental health concerns that don't always respond to other treatment options, and it can be particularly useful for treating borderline personality disorder. These therapies are not available in countries like India, where mental illness is taboo. Women fear being treated because they could be branded as mentally ill and sick; society castigates or isolates such women from the mainstream. The solution is to provide treatment discreetly. In this abstract we focus on borderline personality disorder, for which schema-based therapy is not available in rural areas, nor can women travel to the main city for treatment without their families' knowledge. The proposed solution is a chatbot in native languages using LLMs, where the expert system administers the Young questionnaire in native languages and scores the maladaptive schemas. Once a schema is identified, information on reparenting or repairing the maladaptive schema is provided in the native language. This gives women privacy from the family and also eliminates the cost of travelling to the city for treatment. This GenAI app can be connected to mental health hospitals in metro cities, which can provide further treatment if needed. The main benefit is that the information is provided in their mother tongue, their native language, which helps them immensely. |
Yashaswini Viswanath · Pavitra T · Dr Meenakshi S 🔗 |
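The questionnaire-scoring step the abstract describes can be sketched as a simple aggregation. The item-to-schema mapping and the cutoff below are invented placeholders for illustration, not the actual Young Schema Questionnaire items or any clinical threshold.

```python
# Sketch of scoring a schema questionnaire: average the answers that
# load on each schema and flag schemas above a cutoff. The mapping and
# cutoff are hypothetical, for illustration only.

# item id -> schema it loads on (invented mapping)
ITEM_SCHEMA = {1: "abandonment", 2: "abandonment", 3: "mistrust", 4: "mistrust"}
CUTOFF = 4.0  # mean item score (1-6 scale) above which a schema is flagged

def flag_schemas(answers: dict) -> list:
    """Return the schemas whose mean item score meets the cutoff."""
    totals, counts = {}, {}
    for item, score in answers.items():
        schema = ITEM_SCHEMA[item]
        totals[schema] = totals.get(schema, 0) + score
        counts[schema] = counts.get(schema, 0) + 1
    return sorted(s for s in totals if totals[s] / counts[s] >= CUTOFF)

print(flag_schemas({1: 5, 2: 4, 3: 2, 4: 3}))
```

In the proposed chatbot, the LLM's role sits around this deterministic core: asking the items conversationally in the native language and explaining the flagged schema back to the user.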
-
|
Machine Doctor: Borderline Schema Therapy using GenAI for Indian Rural women
(
Oral
)
>
link
Schema therapy aims to address maladaptive schemas, which can contribute to mental health conditions. The psychologist Jeffrey E. Young originally developed schema therapy to treat personality disorders, but therapists have since used it to manage a wide range of conditions. Schema therapy is a newer type of therapy that combines elements of cognitive behavioral therapy (CBT), psychoanalysis, attachment theory, and emotion-focused therapy, among others. It is an integrative approach that aims to treat personality disorders and other mental health concerns that don't always respond to other treatment options, and it can be particularly useful for treating borderline personality disorder. These therapies are not available in countries like India, where mental illness is taboo. Women fear being treated because they could be branded as mentally ill and sick; society castigates or isolates such women from the mainstream. The solution is to provide treatment discreetly. In this abstract we focus on borderline personality disorder, for which schema-based therapy is not available in rural areas, nor can women travel to the main city for treatment without their families' knowledge. The proposed solution is a chatbot in native languages using LLMs, where the expert system administers the Young questionnaire in native languages and scores the maladaptive schemas. Once a schema is identified, information on reparenting or repairing the maladaptive schema is provided in the native language. This gives women privacy from the family and also eliminates the cost of travelling to the city for treatment. This GenAI app can be connected to mental health hospitals in metro cities, which can provide further treatment if needed. The main benefit is that the information is provided in their mother tongue, their native language, which helps them immensely. |
Yashaswini Viswanath · Pavitra T · Dr Meenakshi S 🔗 |
-
|
System design for Transcribing Tamil Songs to overcome language barriers
(
Poster
)
>
link
This paper presents a transformative system designed to bridge language barriers by automatically transcribing Tamil songs into English lyrics through audio analysis. By harnessing cutting-edge audio processing and natural language translation techniques, the system enables the conversion of Tamil song vocals into meaningful English lyrics, thereby expanding cross-cultural accessibility and appreciation of Tamil music. The system's architecture involves training a sophisticated audio recognition model on a diverse dataset of Tamil songs. Through spectral analysis and linguistic pattern recognition, the model identifies vocal segments and phonetic structures, accurately capturing the essence of the original lyrics. Subsequently, a translation component powered by advanced machine translation methods converts the phonetic representations into coherent English lyrics while preserving the emotional and thematic nuances of the song. This innovation opens avenues for international audiences to engage with Tamil music in a meaningful way, transcending language barriers. Moreover, it offers a tool for language learners and enthusiasts to delve into the linguistic intricacies of Tamil songs, promoting cultural exchange and appreciation. Although challenges related to nuanced translation and cultural context arise, this paper underscores the immense potential of the proposed system to bridge linguistic gaps, foster intercultural connections, and contribute to the global music landscape. By amalgamating music, technology, and language translation, this system paves the way for a more inclusive and interconnected musical experience. |
Suresh Lokiah 🔗 |
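The three-stage architecture described above (vocal segmentation, phonetic recognition, translation) can be sketched as a pipeline of stage functions. The stage bodies below are stubs with an invented one-word example; real implementations would call audio and machine-translation models, which are out of scope for this sketch.

```python
# Skeleton of the transcription pipeline: segment vocals, recognise a
# phonetic transcription, then translate it. Stage bodies are stubs so
# the pipeline's shape, not any real model, is what is shown here.

def segment_vocals(audio: list) -> list:
    """Stub: return spans of the audio that contain singing."""
    return [audio]  # pretend the whole clip is one vocal segment

def recognise_phonetics(segment: list) -> str:
    """Stub: map a vocal segment to a Tamil phonetic transcription."""
    return "vanakkam"  # hypothetical transcription of the segment

def translate(phonetic: str) -> str:
    """Stub: translate a phonetic transcription into English."""
    return {"vanakkam": "greetings"}.get(phonetic, "?")

def transcribe(audio: list) -> list:
    """Run the full pipeline and return English lyric lines."""
    return [translate(recognise_phonetics(seg)) for seg in segment_vocals(audio)]

print(transcribe([0.1, 0.2, 0.3]))
```

Keeping the stages as separate functions mirrors the paper's design: each stage can be swapped for a stronger model without touching the others.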
-
|
System design for Transcribing Tamil Songs to overcome language barriers
(
Oral
)
>
link
This paper presents a transformative system designed to bridge language barriers by automatically transcribing Tamil songs into English lyrics through audio analysis. By harnessing cutting-edge audio processing and natural language translation techniques, the system enables the conversion of Tamil song vocals into meaningful English lyrics, thereby expanding cross-cultural accessibility and appreciation of Tamil music. The system's architecture involves training a sophisticated audio recognition model on a diverse dataset of Tamil songs. Through spectral analysis and linguistic pattern recognition, the model identifies vocal segments and phonetic structures, accurately capturing the essence of the original lyrics. Subsequently, a translation component powered by advanced machine translation methods converts the phonetic representations into coherent English lyrics while preserving the emotional and thematic nuances of the song. This innovation opens avenues for international audiences to engage with Tamil music in a meaningful way, transcending language barriers. Moreover, it offers a tool for language learners and enthusiasts to delve into the linguistic intricacies of Tamil songs, promoting cultural exchange and appreciation. Although challenges related to nuanced translation and cultural context arise, this paper underscores the immense potential of the proposed system to bridge linguistic gaps, foster intercultural connections, and contribute to the global music landscape. By amalgamating music, technology, and language translation, this system paves the way for a more inclusive and interconnected musical experience. |
Suresh Lokiah 🔗 |
-
|
Rangoli meets Picasso: Inspirations or Hallucinations?
(
Poster
)
>
link
India is a land of ancient traditions, and one of them is the act of cleansing the house and putting Rangoli in front of it, more so during festivities. These designs use dots and lines and a kind of creativity that we now study as graph theory, yet the women who draw them are adept at creating them without any formal education in maths. Rangolis these days are also filled in, using curves and objects to create an art piece on the ground with coloured flour. The women create these from memory, and the practice acts as a therapy of mindfulness. This demo shows how Rangoli from India can be viewed from a Global North angle and how the different art revolutions could possibly influence and create Rangoli. This is made possible by Stable Diffusion, which takes text prompts and creates images in different art styles. What are we trying to achieve here? Why should this be displayed at NeurIPS? The very act of influence is startling and amusing. It opens doors for conversations around art styles, traditions, and their possibilities: an act of wonder to bewilder the minds of those who see it. I have attached the art pieces, which can be a visual demo at the conference. |
Yashaswini Viswanath · Dr Meenakshi S · Pavitra T 🔗 |
-
|
Rangoli meets Picasso: Inspirations or Hallucinations?
(
Oral
)
>
link
India is a land of ancient traditions, and one of them is the act of cleansing the house and putting Rangoli in front of it, more so during festivities. These designs use dots and lines and a kind of creativity that we now study as graph theory, yet the women who draw them are adept at creating them without any formal education in maths. Rangolis these days are also filled in, using curves and objects to create an art piece on the ground with coloured flour. The women create these from memory, and the practice acts as a therapy of mindfulness. This demo shows how Rangoli from India can be viewed from a Global North angle and how the different art revolutions could possibly influence and create Rangoli. This is made possible by Stable Diffusion, which takes text prompts and creates images in different art styles. What are we trying to achieve here? Why should this be displayed at NeurIPS? The very act of influence is startling and amusing. It opens doors for conversations around art styles, traditions, and their possibilities: an act of wonder to bewilder the minds of those who see it. I have attached the art pieces, which can be a visual demo at the conference. |
Yashaswini Viswanath · Dr Meenakshi S · Pavitra T 🔗 |
-
|
Farmer's Friend: IoT Fusion using Generative AI for farmer chatbot in Kannada
(
Poster
)
>
link
FARMER'S FRIEND. Introduction: Farmers in India have to bear harsh climatic conditions just to check their farms, switch the pumps on/off, and perform other small agricultural tasks. Though we have tractors, irrigation systems, etc., we also need other instruments and modern technology in agriculture to help farmers grow food for us. Methods: 1. Farmers have to go to the field to check motors, switch the water pumps on/off, check soil fertility and plant health, add manures and fertilizers, and even remove weeds. 2. They have to bear extreme climatic conditions and also face problems during heavy rainfall. 3. Their burden can be reduced by installing devices or sensors in the field to check temperature, humidity, soil fertility, the growth of pests, etc. 4. By checking soil fertility, these devices can also tell farmers which manure or fertilizer to use and in what quantity. 5. This can be done by providing an interface using LLM chatbots connected over 5G networks, where IoT fusion can benefit farmers and reduce their efforts. 6. Weather predictions can also be made by installing numerical weather prediction models to provide accurate forecasts. 7. This information can be reported to farmers through a mobile application in their regional language, such as Kannada. 8. Farmers can therefore give commands to these devices through the application while sitting at home. How to build the chatbot? It should be in the native language and able to understand what farmers are talking about. Role of Generative AI: We are using generative AI for translation services and chatbot creation. We want to take this idea to the Government of Karnataka and MeitY (Ministry of Electronics and Information Technology). 
We want to raise funds and make this a reality. Result: By using these applications, we can reduce the efforts of our farmers. These are the basic routines followed by Indian farmers; this application can be used to reduce their efforts and help increase production. |
Yashaswini Viswanath · Mayank Dharani 🔗 |
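The sensor-to-advisory step in the methods above can be sketched in a few lines. The threshold values are invented for the example, and the translation-to-Kannada step is left as a stub, since the project proposes doing that with an LLM translation service rather than a lookup table.

```python
# Illustrative sketch of Farmer's Friend's decision step: map raw field
# sensor readings to plain-English advisories. Thresholds are invented
# example values, not agronomic recommendations.

SOIL_MOISTURE_MIN = 30.0   # percent; below this, irrigation is advised
TEMP_MAX = 38.0            # Celsius; above this, warn about heat stress

def advisories(moisture: float, temperature: float) -> list:
    """Turn sensor readings into a list of advisories for the farmer."""
    out = []
    if moisture < SOIL_MOISTURE_MIN:
        out.append("switch irrigation pump on")
    if temperature > TEMP_MAX:
        out.append("heat stress risk: water in the evening")
    return out or ["no action needed"]

def to_kannada(text: str) -> str:
    """Stub for the LLM translation service the abstract proposes."""
    return text  # a real system would call a translation model here

print([to_kannada(a) for a in advisories(moisture=22.0, temperature=40.0)])
```

The chatbot layer would then deliver these advisories conversationally and accept spoken commands ("switch the pump off") that map back onto the same device actions.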
-
|
Farmer's Friend: IoT Fusion using Generative AI for farmer chatbot in Kannada
(
Oral
)
>
link
FARMER'S FRIEND. Introduction: Farmers in India have to bear harsh climatic conditions just to check their farms, switch the pumps on/off, and perform other small agricultural tasks. Though we have tractors, irrigation systems, etc., we also need other instruments and modern technology in agriculture to help farmers grow food for us. Methods: 1. Farmers have to go to the field to check motors, switch the water pumps on/off, check soil fertility and plant health, add manures and fertilizers, and even remove weeds. 2. They have to bear extreme climatic conditions and also face problems during heavy rainfall. 3. Their burden can be reduced by installing devices or sensors in the field to check temperature, humidity, soil fertility, the growth of pests, etc. 4. By checking soil fertility, these devices can also tell farmers which manure or fertilizer to use and in what quantity. 5. This can be done by providing an interface using LLM chatbots connected over 5G networks, where IoT fusion can benefit farmers and reduce their efforts. 6. Weather predictions can also be made by installing numerical weather prediction models to provide accurate forecasts. 7. This information can be reported to farmers through a mobile application in their regional language, such as Kannada. 8. Farmers can therefore give commands to these devices through the application while sitting at home. How to build the chatbot? It should be in the native language and able to understand what farmers are talking about. Role of Generative AI: We are using generative AI for translation services and chatbot creation. We want to take this idea to the Government of Karnataka and MeitY (Ministry of Electronics and Information Technology). 
We want to raise funds and make this a reality. Result: By using these applications, we can reduce the efforts of our farmers. These are the basic routines followed by Indian farmers; this application can be used to reduce their efforts and help increase production. |
Yashaswini Viswanath · Mayank Dharani 🔗 |