Work Product Submission index
(Project, Contributor)
LG Wildfire Tracker, Gerard Monsó
Space Visualizations for Liquid Galaxy, Mattia Baggini
LG Gemini AI Touristic Tool, Sidharth Mudgil
LG-AgroTaskManager, Davit Mas
Super Liquid Galaxy controller, Aritra Biswas
LG AI Travel Itinerary Generator, Rohit Kumar
AI Fictional Travel Itinerary Generator (based on Gemini), Shiven Upadhyay
LG AI Touristic Explorer, Manas Dalvi
Android, Chrome and AI Application management, Oscar Pena
Gemini AI Touristic tool, Mahinour Elsarky
LG AI Voices, Ryan Kim
LG AIS Visualization, Rofayda Bassem
LG Wildfire Tracker
Gerard Monsó
A short description of the goals of the project.
The first part represents past forest fires on the Liquid Galaxy, showing all the affected areas, how each fire developed, and all the related information.
The second part visualizes real-time fires, whether forest fires or house fires, on the Liquid Galaxy together with their information.
The third part represents the risk of forest fires across the United States.
What you did.
The project is a responsive Flutter application designed for tablets which, through an SSH connection and API calls, represents the desired information on the LG. Additionally, the application is autonomous: without being connected to the LG, it can represent and display the information on the mobile device itself.
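As a rough illustration of the SSH side only, a Flutter app could push a KML file to the Liquid Galaxy master with the dartssh2 package (a common choice in Liquid Galaxy Flutter apps, assumed here); the host, credentials and target path below are placeholders, not this project's actual values.

import 'package:dartssh2/dartssh2.dart';

/// Sends a KML document to the Liquid Galaxy master over SSH.
/// Host, credentials and target path are placeholders; real rigs differ.
Future<void> sendKmlToLg(String kml) async {
  final client = SSHClient(
    await SSHSocket.connect('192.168.0.10', 22), // LG master IP (placeholder)
    username: 'lg',
    onPasswordRequest: () => 'lg', // placeholder password
  );

  // Write the KML into the folder served to Google Earth on the rig.
  // Single quotes inside the KML would need escaping in a real app.
  await client.run("echo '$kml' > /var/www/html/wildfire.kml");

  client.close();
}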
The current state.
The three parts of the project are implemented and, in principle, the project is 100% finished.
What's left to do.
The project could be expanded to include the water points of the territory, which would make it an even stronger and more valuable application.
What code got merged (or not) upstream
The latest version of the application has been published with all the improvements suggested by the testers. All three parts of the project can be used without problems.
Any challenges or important things you learned during the project.
The main challenges were the SSH connections and the design of the app, as it took a lot of work to get those parts working.
Links to: Github, documentation, and Play Store link.
Github repo: https://github.com/LiquidGalaxyLAB/LG-Wildfire-tracker
Github board: https://github.com/users/gmonso/projects/6/views/1
Final apk for tablets on drive: APK, or compile your own from code repo.
Readme Github repo: https://github.com/LiquidGalaxyLAB/LG-Wildfire-tracker/blob/main/README.md
Space Visualizations for Liquid Galaxy
Mattia Baggini
A short description of the goals of the project.
Space Visualizations for Liquid Galaxy is an application that showcases the Mars 2020 NASA mission and some of the most famous Earth orbits. The application uses the Liquid Galaxy platform to provide immersive space exploration experiences. In the Mars mission section, users can interactively learn about the mission by visualizing 3D models, technical data, and the path of the Perseverance rover and Ingenuity drone. Users can see Mars from the perspective of the Perseverance rover with more than 220,000 photos available. The photos can also be displayed on all Liquid Galaxy screens, providing a very immersive experience.
What you did.
I developed the application with help from my mentor, using Flutter, Dart and various other technologies. KMLs were used to show elements on the Liquid Galaxy screens and on the tablet maps. The application was tested every week with support from the Liquid Galaxy headquarters. It includes all the needed documentation, making it easy for new open-source developers to understand and work on.
The current state.
All the features of the application have been successfully implemented and the documentation is up to date with the project.
What's left to do.
The application can be translated into other languages by open source contributors. To facilitate this, I have created a text constants file that contains all the text used in the application. This file will serve as the starting point for any translations.
What code got merged (or not) upstream
The code is publicly available in the Liquid Galaxy project GitHub repository, along with all the necessary documentation.
Any challenges or important things you learned during the project.
The most challenging part of the project was creating a map widget for the tablet app that displays KML files and 2D orbits. To achieve this, I customized the Flutter Maps library to handle KML rendering and developed an algorithm to convert 3D KML orbit coordinates into 2D polylines for accurate map display. Throughout the project, I gained many technical skills and learned effective approaches to project development, thanks to my mentor's guidance and support from the community.
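As a rough, simplified illustration of that conversion (not the project's actual code), the sketch below parses a KML LineString's "lon,lat,alt" coordinate tuples and drops the altitude to obtain 2D points that could feed a flutter_map polyline.

import 'package:latlong2/latlong.dart';

/// Turns the <coordinates> block of a KML LineString, made of "lon,lat,alt"
/// tuples, into 2D points by dropping the altitude.
List<LatLng> kmlCoordinatesTo2d(String coordinatesText) {
  final points = <LatLng>[];
  for (final tuple in coordinatesText.trim().split(RegExp(r'\s+'))) {
    final parts = tuple.split(',');
    if (parts.length < 2) continue;
    final lon = double.parse(parts[0]);
    final lat = double.parse(parts[1]);
    points.add(LatLng(lat, lon)); // altitude (third value) is ignored on the 2D map
  }
  return points;
}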
Links to: Github, documentation, and Play Store link.
GitHub: https://github.com/LiquidGalaxyLAB/LG-Space-Visualizations
Documentation: https://liquidgalaxylab.github.io/LG-Space-Visualizations/
Final apk for tablets on drive: APK, or compile your own from code repo.
LG Gemini AI Touristic Tool
Sidharth Mudgil
Short description of the goals of the project.
This AI-powered Flutter app makes it simple to explore tourism information. Connected to the Liquid Galaxy system, the app allows users to ask questions about tourist spots using a chat interface or voice commands. It provides personalized itineraries, recommends tourist places, and explains the history and significance of each location. The app offers easy-to-use prompts like "What to Eat" and "Find Tourist Place," enabling users to quickly choose what interests them. For more specific needs, users can type their own questions in the chat. The app also supports multi-modal interactions, allowing users to engage through images, text, or voice-to-text, making it adaptable to different preferences.
What you did.
I developed an Android application from scratch using Flutter, Dart and Google's Gemini AI. The Gemini model is used for the AI features and KML for visualization on Liquid Galaxy. The application was tested every week with support from the Liquid Galaxy headquarters. It includes all the needed documentation, making it easy for new open-source developers to understand and work on.
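As a hedged illustration only, a Gemini request from Dart with the google_generative_ai package might look like the sketch below; the model name and prompt are examples, not necessarily what the app uses.

import 'package:google_generative_ai/google_generative_ai.dart';

/// Asks Gemini for touristic information about a place.
Future<String?> askGemini(String apiKey, String place) async {
  final model = GenerativeModel(model: 'gemini-1.5-flash', apiKey: apiKey);
  final response = await model.generateContent([
    Content.text('Recommend tourist places and what to eat in $place.'),
  ]);
  return response.text;
}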
The current state.
All the features are successfully implemented and working.
What's left to do.
In some rare cases, orbits do not start on the first click. I will fix this by either changing the SSH library or debugging my current code to check whether a delay needs to be added.
What code got merged (or not) upstream
The code is publicly available in the Liquid Galaxy project GitHub repository, along with all the necessary documentation.
Any challenges or important things you learned during the project.
The most challenging part of the project was a crash caused by the old ssh_2 library. With the help of my mentor Prayag I was able to solve one crash, and I resolved another crash by myself.
Links to Github, documentation, and Play Store link.
Github: https://github.com/LiquidGalaxyLAB/LG-Gemini-AI-Touristic-info-tool/
Documentation: https://github.com/LiquidGalaxyLAB/LG-Gemini-AI-Touristic-info-tool/blob/main/README.md
Final apk for tablets on drive: APK, or compile your own from code repo.
LG-AgroTaskManager
Davit Mas
A short description of the goals of the project.
The main goal of this project is to create a tool that helps control agronomic robots and automate the tasks a farmer may have in the field, representing on Liquid Galaxy the places where a robot works and what it is doing.
As a secondary task, I added the option for every user to add crops to a database, with their planting, transplanting and harvesting dates, so they get a reminder when those tasks should be done. This can also be visualized in Liquid Galaxy.
What you did.
The project consists of a Flutter app designed mainly for tablets. It uses complementary tools such as the ISAR database to store the crops and robots, and SSH for the connection with Liquid Galaxy, to further enhance the app.
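For illustration only, an Isar collection for crops could be declared roughly as below; the field names and schema are assumptions, not the app's actual model.

import 'package:isar/isar.dart';

part 'crop.g.dart'; // generated by `dart run build_runner build`

/// Hypothetical Isar collection for crops and their key dates.
@collection
class Crop {
  Id id = Isar.autoIncrement;

  late String name;
  DateTime? plantingDate;
  DateTime? transplantingDate;
  DateTime? harvestingDate;
}

/// Opens the database and stores a crop.
Future<void> saveCrop(String dir, Crop crop) async {
  final isar = await Isar.open([CropSchema], directory: dir);
  await isar.writeTxn(() => isar.crops.put(crop));
}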
The current state.
The app is for the most part finished.
What's left to do.
With more time, the following could have been done:
Creating a script that automatically fills the database.
What code got merged (or not) upstream
The code is publicly available in the Liquid Galaxy project GitHub repository, along with all the necessary documentation.
Any challenges or important things you learned during the project.
Learning the correct structure of a Flutter project was a struggle, but very satisfying in the end, as was figuring out which technologies we should use. For example, I used SQFLite until two months before the end date and then switched to ISAR. Lastly, the SSH communications were a bit difficult on occasion.
Links to Github, documentation, and Play Store link.
Github repo: https://github.com/LiquidGalaxyLAB/LG-Agro-Task-Manager
Final apk for tablets on drive: APK, or compile your own from code repo.
Readme Github repo: https://github.com/LiquidGalaxyLAB/LG-Agro-Task-Manager/blob/main/README.md
Super Liquid Galaxy Controller
Aritra Biswas
Short Description of the goals of the project
The Super Liquid Galaxy Controller is an application designed to serve as the primary interface for interacting with the Liquid Galaxy rig, offering users a seamless and engaging experience. Built on the Flutter framework, the application enhances the capabilities of the Liquid Galaxy platform by integrating advanced features like custom KML building, immersive tour creation, and intuitive map-based controls. Users can easily explore the world, create detailed visualizations of geospatial data, and control the Liquid Galaxy rig effortlessly through voice commands or a user-friendly interface. Whether navigating through points of interest, building intricate tours, or manipulating the display with map movements, the Super Liquid Galaxy Controller provides an interactive and comprehensive toolset that enriches the visualization experience on the Liquid Galaxy platform.
What you did
I developed the Super Liquid Galaxy Controller application with guidance from my mentor, Yash Raj Bharti, utilizing Flutter, Dart, and various other technologies. KMLs were employed to display elements on both Liquid Galaxy screens and tablet maps. The application underwent weekly testing with support from the Liquid Galaxy headquarters to ensure its functionality and performance. It also integrates the GeoApify API to retrieve essential information for different screens and leverages Wikipedia API endpoints to gather detailed data about various points of interest.
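As a hedged example of the kind of lookup involved, the sketch below fetches a short point-of-interest description from Wikipedia's public REST summary endpoint; the app's actual requests (and its GeoApify calls) may be structured differently.

import 'dart:convert';
import 'package:http/http.dart' as http;

/// Fetches a short description of a point of interest from Wikipedia.
Future<String?> fetchPoiSummary(String title) async {
  final uri = Uri.parse(
      'https://en.wikipedia.org/api/rest_v1/page/summary/${Uri.encodeComponent(title)}');
  final response = await http.get(uri);
  if (response.statusCode != 200) return null;
  final body = jsonDecode(response.body) as Map<String, dynamic>;
  return body['extract'] as String?; // plain-text summary of the article
}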
The current state
All the core functionalities for the various goals of my project have been implemented and deployed to the Play Store.
What's left to do
The central core functionalities have all been implemented already. However, I plan to release some updates extending some of the features in future releases.
What code got merged (or not) upstream
The code is publicly available in the Liquid Galaxy project GitHub repository, along with all the necessary documentation.
Any challenges or important things you learned during the project?
My experience with GSoC 2024 has been incredibly educational, pushing me beyond my comfort zone to learn new skills and adopt innovative problem-solving approaches that I hadn’t considered before. The most challenging aspect was mastering the art of writing production-ready code, prepared for deployment. However, this challenge also proved to be the most rewarding, as it transformed my passions into practical, real-world skills.
Links
GitHub: https://github.com/LiquidGalaxyLAB/Super-Liquid-Galaxy-Controller
LG AI Travel Itinerary Generator
Rohit Kumar
Project Goals:
The LG AI Travel Itinerary Generator makes travel planning easier by creating personalized and engaging stories about specific Points of Interest (POIs). Using the Groq AI model, the app helps users create custom travel itineraries that inspire their trips. It is connected to Google Maps for real-time POI visualization and shows information about the place, such as the temperature, the country it is located in, and the timezone.
What I Did:
I built the entire app from the ground up using Flutter and Dart. The app uses Groq's AI to create personalized travel stories and integrates with Google Maps for sub-POI visualization, along with a feature to select any AI model listed on Groq. It also connects to the LG rig using dartssh2. I tested and improved the app weekly with feedback from the Liquid Galaxy team. Additionally, I researched and fixed an issue where real devices sometimes couldn't connect to the rigs.
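As a hedged illustration of the Groq side, a chat-completion request from Dart might look like the sketch below; Groq exposes an OpenAI-compatible endpoint, and the model name shown is only an example since the app lets the user pick any model listed on Groq.

import 'dart:convert';
import 'package:http/http.dart' as http;

/// Requests a short itinerary text from Groq's chat completions endpoint.
Future<String> generateItinerary(String apiKey, String model, String poi) async {
  final response = await http.post(
    Uri.parse('https://api.groq.com/openai/v1/chat/completions'),
    headers: {
      'Authorization': 'Bearer $apiKey',
      'Content-Type': 'application/json',
    },
    body: jsonEncode({
      'model': model, // e.g. 'llama-3.1-8b-instant'
      'messages': [
        {'role': 'user', 'content': 'Write a short travel itinerary for $poi.'}
      ],
    }),
  );
  final data = jsonDecode(response.body) as Map<String, dynamic>;
  return data['choices'][0]['message']['content'] as String;
}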
Current State:
All features have been successfully implemented and are fully functional.
What's Left to Do:
In some cases, an error occurs while loading the information from the internet and no KML is shown at all. I will fix this by providing a sample KML and place as a fallback in case of errors.
Code Status:
The code is publicly available in the Liquid Galaxy project GitHub repository, along with all the necessary documentation.
Challenges and Learnings:
While building the application I went through several bugs and errors. The first was keeping the information obtained from the Groq response accurate; for that I used the Places API, which resolved it for the most part. Secondly, I ran into an error connecting the Liquid Galaxy rigs with my real device, which I also resolved successfully. Through this whole process I learned to break problems into pieces and fix each one until the solution is ready.
Links to Github, documentation, and Play Store link.
Github: https://github.com/LiquidGalaxyLAB/LG-AI-fictional-travel-itinerary-generator
Documentation: https://github.com/LiquidGalaxyLAB/LG-AI-fictional-travel-itinerary-generator/blob/master/README.md
Final apk for tablets on drive: APK, or compile your own from code repo.
AI Fictional Travel Itinerary Generator, based on Gemini
Shiven Upadhyay
A short description of the goals of the project
The Fictional Travel Itinerary Generator is an innovative Flutter application designed to transform travel planning through the integration of generative text AI and captivating visual storytelling. Built on the Gemini LLM, the app crafts immersive, personalized travel experiences by generating rich, fictional stories based on user-selected points of interest (POIs). These stories unfold paragraph by paragraph, exploring sub-POIs sequentially and presenting them through high-quality graphics and animations. Users can customize their experiences with a traveler's personality and changes to their query, while the app's realistic speech capabilities enhance accessibility and engagement. Featuring multi-language support, dark mode, and a modern UI with glassmorphism and neon accents, the application offers interactive recommendations and tour suggestions: an interactive tool where you can customize your fictional travel, with your traveler's persona defining an itinerary that ultimately caters to you. This project merges creativity with advanced technology to provide a unique and inspiring travel planning tool.
What you did
I developed an Android application using Flutter, Dart and the Gemini API. I built the KML configuration and integrated it with the application, using Gemini's results for the Liquid Galaxy visualization. The app was tested at the Liquid Galaxy office in Spain. It includes the much-needed documentation.
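As a minimal illustration of the KML side (not the project's actual templates), a generated sub-POI could be turned into a Placemark for Liquid Galaxy roughly like this:

/// Builds a minimal KML Placemark for a point of interest.
String buildPlacemarkKml(String name, double lat, double lon) => '''
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>$name</name>
      <Point>
        <coordinates>$lon,$lat,0</coordinates>
      </Point>
    </Placemark>
  </Document>
</kml>
''';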
The current State
All the features have been integrated and are working successfully, except for a couple of extras that I am still working on.
What’s Left to do
Add translation of the static elements of the application, and Flutter TTS integration.
What Code got Merged (or not) upstream
The code is publicly available in the Liquid Galaxy project GitHub repository, along with all the necessary documentation.
Any challenges or important things you learned during the project.
The most challenging part was creating a Docker image that could work on any system and transitioning to other LLM sources. I was able to solve both problems with a little help from my mentor and other GSoC contributors.
Links to Github and Documentation.
GitHub: https://github.com/LiquidGalaxyLAB/LG-Gemma-AI-fictional-travel-itinerary-generator.git
Documentation: https://github.com/LiquidGalaxyLAB/LG-Gemma-AI-fictional-travel-itinerary-generator/blob/main/README.md
Final apk for tablets on drive: APK, or compile your own from code repo.
Android, Chrome and AI Application management
Oscar Pena
A short description of the goals of the project.
The primary goal of this project is to recode, test, and publish applications developed by contributors of the Liquid Galaxy community during the 2023-2024 period. This includes applications from GSoC Task 2 and the Flutter Kiss Contest. The objectives focus on reviewing and updating these existing applications to ensure they meet modern standards, followed by publishing them on the Google Play Store and uploading them to GitHub for broader accessibility. Additionally, for applications utilizing Docker-based AI models, testing will be conducted against the Liquid Galaxy system and AI servers.
What you did.
The project involved collecting the existing applications, testing them on tablets, and ensuring they functioned correctly before uploading them to the Play Store and GitHub. For apps that utilized AI models, tests were carried out on the Liquid Galaxy and Docker-based AI servers. Throughout this process, compatibility checks and improvements were made to ensure each app was fully compliant with the necessary standards and able to integrate seamlessly with the Liquid Galaxy ecosystem.
The current state.
The recoding and testing phases of the project are nearly complete, but there are still a few tasks remaining. Two applications are pending upload to the Play Store and GitHub, while two others are still under review and awaiting approval by the Play Store. Testing against Liquid Galaxy and AI servers has been successful so far, but final publication will be completed once the remaining apps are approved and uploaded.
What's left to do.
There are potential future enhancements, such as improving documentation, incorporating newer Liquid Galaxy features, and optimizing AI models for better performance. Additionally, if any apps are rejected, they will be re-uploaded with the necessary fixes to meet the requirements.
What code got merged (or not) upstream
The latest versions of some applications have been published after receiving feedback from testers. However, two applications are still pending upload, and two more are awaiting approval from the Play Store. Once these remaining apps are uploaded and approved, all changes and improvements will be fully merged, making the applications available for community use.
Any challenges or important things you learned during the project.
The most challenging aspects of this project involved learning the Play Store uploading process and thoroughly testing the applications to ensure they met Play Store requirements. Additionally, navigating the complexities of Docker-based AI integration required significant effort. These challenges provided valuable experience in testing, app validation, and managing the intricacies of the publication process.
Links to Github, documentation, and Play Store link.
Updated and new Manuals 2024: LINK
Github Repo of the Project: LINK
Github Repo of the Privacy Policies of the apps: LINK
PlayStore Links: (pending)
LG-Basic-Controller, LaserSlides, DiscoverAnimals, Gemini AI Touristic tool, Super Liquid Galaxy Controller, LG AI travel itinerary, LG AI fictional itinerary, LG Space Visualizations, LG Wildfire tracker, LG AIS visualization
LG AI Touristic Explorer
Manas Dalvi
A short description of the goals of the project.
The LG AI Touristic Explorer is a Flutter application designed to offer an immersive exploration of cities. Its primary aim is to provide users with detailed insights into a city's historical, cultural, and geographical aspects, along with comprehensive information about various points of interest. The app can narrate stories about the city, enhancing the user experience with a deeper understanding of the location. It also features KML visualizations for select cities, multilingual support, and customizable color themes. The information is powered by Google's Gemini API, with narration handled by the Deepgram API.
What you did.
I developed a Flutter application with guidance from my mentor, focused on enhancing city exploration. I integrated Google's Gemini API to generate detailed city information and implemented the Deepgram API to enable narration within the app. Additionally, I added support for KML visualizations in select cities by gathering data on their outlines, historical maps, and more. The app's multilingual feature allows content generation in the user's chosen language.
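As a hedged sketch of the narration step, a Deepgram text-to-speech request from Dart might look roughly like the code below; the endpoint and the "aura-asteria-en" voice follow Deepgram's public API and are assumptions rather than the app's confirmed configuration.

import 'dart:convert';
import 'dart:typed_data';
import 'package:http/http.dart' as http;

/// Requests narration audio for the given text from Deepgram's TTS endpoint.
Future<Uint8List> narrate(String apiKey, String text) async {
  final response = await http.post(
    Uri.parse('https://api.deepgram.com/v1/speak?model=aura-asteria-en'),
    headers: {
      'Authorization': 'Token $apiKey',
      'Content-Type': 'application/json',
    },
    body: jsonEncode({'text': text}),
  );
  return response.bodyBytes; // audio bytes to play back in the app
}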
The current state.
All the features of my application have been successfully implemented.
What's left to do.
The additional visualization options could be expanded further. For instances where some points of interest lack images, I've added a placeholder image to ensure consistency.
What code got merged (or not) upstream
The code is publicly available in the Liquid Galaxy project GitHub repository, along with all the necessary documentation.
Any challenges or important things you learned during the project.
Throughout the project, I faced a few challenges, especially when it came to running and managing local LLMs. This required me to really dig into their architecture and how they work. Dockerizing the model was another tricky part, but it taught me a lot about containerization and how to handle dependencies and environments better. I also ran into some issues with the orbit functionality, particularly with SSH transfer.
Links to Github, documentation, and Play Store link.
GitHub: https://github.com/LiquidGalaxyLAB/LG-AI-Touristic-explorer
Documentation: https://github.com/LiquidGalaxyLAB/LG-AI-Touristic-explorer
Final apk for tablets on drive: APK, or compile your own from code repo.
Gemini AI Touristic tool
Mahinour Elsarky
A short description of the goals of the project:
This year, my project focused on creating an AI-powered touristic information tool for Liquid Galaxy. The tool, built as a Flutter-based Android tablet app, makes travel planning easier and more inspiring by helping users discover tailored points of interest worldwide. Whether it’s finding popular tourist spots, dining options, or shopping locations near you, or around the globe, the app uses the power of Generative AI through Google’s Gemma and Gemini models to provide personalized recommendations.
With Liquid Galaxy technology, users can visualize their entire trip across multiple screens, creating a panoramic experience. If a Liquid Galaxy rig isn’t available, the app still offers an immersive experience via integrated Google Maps.
Work Done:
The work was divided into four main areas:
- AI Integration: I integrated the Gemma model locally using Ollama, Langchain, and RAG (Retrieval Augmented Generation) and used the Gemini model via API.
- Dockerization: I created a Docker image for the local Gemma model, designed to run on a Rocky Linux instance, matching the setup of Liquid Galaxy’s AI server in Lleida Lab.
- Flutter App Development: I developed the Flutter app, incorporating the AI models and Google Maps for visualizations.
- Liquid Galaxy Visualizations: I implemented visualizations on the Liquid Galaxy rig using KMLs and SSH commands, providing users with an interactive experience.
Current state:
As of August 22, 2024, all key features and functionalities outlined in the project proposal have been successfully implemented. The only change was the switch from the Gemma model to Gemini.
What’s left to do:
While the main features are complete, future enhancements could include adding a chat function for users to ask about specific places and leveraging Gemini’s multimodality to allow users to upload images for location-based insights.
Challenges and Experience:
Throughout my GSoC journey, I encountered several challenges that significantly shaped my problem-solving abilities. One of the main challenges was the limitation of online resources suited to the complexity of my project, particularly when integrating multiple technologies. Often, I had to find my own solutions to address issues that lacked direct answers.
Integrating the Gemma model locally was particularly challenging. I had to optimize its use with Langchain, find the right prompting methods, and balance the model's extensive parameters to ensure it could run efficiently on my CPU/GPU. Additionally, I focused on reducing hallucinations, managing outdated results from the model, ensuring accurate and well-defined JSON outputs, and resolving parsing errors. Developing a suitable Docker image for deployment on the LG AI server in Lleida Lab was also a significant task. Handling streaming responses for the Flutter application added another layer of complexity, requiring careful design and implementation to ensure smooth data delivery.
The need to refactor the code from Gemma to Gemini due to response time issues added another layer of complexity. This transition required swift and careful adjustments to ensure the architecture was compatible with the unique behaviors of different LLMs.
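As an illustrative sketch only, streaming tokens from a locally served Gemma model through Ollama's default REST endpoint could look like the Dart code below; the host, model name, and response handling follow Ollama's public API and are assumptions about the project's actual server setup.

import 'dart:convert';
import 'package:http/http.dart' as http;

/// Streams partial responses from a local Ollama server running Gemma.
Stream<String> streamGemma(String prompt) async* {
  final request = http.Request(
    'POST',
    Uri.parse('http://localhost:11434/api/generate'),
  )..body = jsonEncode({'model': 'gemma', 'prompt': prompt, 'stream': true});

  final response = await request.send();
  // Ollama streams one JSON object per line; yield each partial response.
  await for (final line in response.stream
      .transform(utf8.decoder)
      .transform(const LineSplitter())) {
    if (line.trim().isEmpty) continue;
    final chunk = jsonDecode(line) as Map<String, dynamic>;
    yield (chunk['response'] as String?) ?? '';
    if (chunk['done'] == true) break;
  }
}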
Important Links:
Github Link: https://github.com/LiquidGalaxyLAB/LG-Gemma-AI-Touristic-info-tool/tree/main
Documentation Link:
Readme: https://github.com/LiquidGalaxyLAB/LG-Gemma-AI-Touristic-info-tool/blob/main/README.md
Wiki: https://github.com/LiquidGalaxyLAB/LG-Gemma-AI-Touristic-info-tool/wiki
Dartdocs: https://liquidgalaxylab.github.io/LG-Gemma-AI-Touristic-info-tool/
Final apk for tablets on drive: APK, or compile your own from code repo.
LG AI Voices
Ryan Kim
- Short Description:
LG-AI-Voice-conversational is an API that allows users to integrate voice-to-voice features for Liquid Galaxy apps through various speech-to-text and text-to-speech models with customization to suit the app's needs better. With the latest surges in the field of AI, enabling this feature will allow Liquid Galaxy applications to integrate the newest capabilities into our projects.
The second part of my project was a Flutter app that used this voice-to-voice feature using Deepgram's STT and TTS models to showcase the full flow of the process to allow users to learn about location-based questions and get a response back with AI voices while seeing the visualization through Liquid Galaxy at the same time.
- What I did:
With the help of my mentors, I was able to create a Dockerized REST API using FastAPI to create endpoints for STT, TTS, and Groq for the voice-to-voice flow. The API was integrated with various model options and took into account various parameters, with documentation on how to configure the API keys and set everything up for the contributors to easily integrate into their applications.
For the Flutter application, the procedures followed a similar format to other apps such as setting up SSH to connect to LG and integrating a live stream feature for Deepgram's STT to allow users to talk into the mic and summarize the response and fetch the coordinates of the location via Groq and synthesize the response to voice via the TTS models by Deepgram. The location coordinates were then passed to a KML string to navigate LG to the specified location in the response and give a visual experience for the user while listening to the response.
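As a hedged example of that final navigation step, the sketch below builds a LookAt and writes a "flytoview=" query over SSH, which is a common Liquid Galaxy convention; the exact mechanism, host, and values used by this project are assumptions.

import 'package:dartssh2/dartssh2.dart';

/// Flies the Liquid Galaxy rig to the coordinates returned by the AI pipeline.
Future<void> flyTo(SSHClient client, double lat, double lon) async {
  final lookAt = '<LookAt>'
      '<longitude>$lon</longitude><latitude>$lat</latitude>'
      '<range>5000</range><tilt>45</tilt><heading>0</heading>'
      '</LookAt>';
  // Writing the query file is one common way LG setups trigger camera moves.
  await client.run('echo "flytoview=$lookAt" > /tmp/query.txt');
}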
- The current state:
Currently on hold due to errors.
- What's left to do:
I have recently updated the API documentation to better help contributors integrate this feature for future LG apps, and I plan to continue updating the Flutter application once I receive feedback and fix any errors present.
- What code got merged (or not) upstream:
The code is publicly available in the GitHub repository, along with all the necessary documentation.
- Any challenges or important things you learned during the project:
Working with various AI models was a challenge, as the STT models had recurring issues recording or understanding speech from users. This made it hard to avoid errors, and I had to implement various error-handling paths so users could see the exact issue in the app.
One thing I wish I had done is interact more with the community throughout the project. Although we saw each other in general meets, asking questions on Discord would not only have helped us bond more, but would also have given us opportunities to get help from other contributors, and people on the Discord could have benefited from seeing the things we were stuck on and learned as well.
- Links to Github, documentation, and Play Store link:
API GitHub Repo: https://github.com/LiquidGalaxyLAB/LG-AI-Voice-conversational
App GitHub Repo: https://github.com/LiquidGalaxyLAB/LG-AI-Voice-Flutter
API Docs GitHub Repo: https://github.com/LiquidGalaxyLAB/LG-AI-Voice-conversational/tree/main/docs
App Docs GitHub Repo: https://github.com/LiquidGalaxyLAB/LG-AI-Voice-Flutter/blob/main/README.md
LG AIS Visualization
Rofayda Bassem
A short description of the goals of the project.
LG AIS Visualization is an app designed to stream real-time AIS data from the Norwegian Coastal Administration, visualizing maritime activities on Google Maps and Liquid Galaxy rigs. It provides features like real-time vessel tracking, detailed vessel information, historical route playback, future route prediction using machine learning models, and collision risk management through advanced calculation methods. Optimized for Liquid Galaxy systems, the app allows seamless synchronization of maps between the rig and the app, while also functioning standalone to provide users with clear insights for enhanced real-time monitoring and analysis of vessel activities.
What you did.
I developed the AIS Visualization app using Dart and Flutter, focusing on real-time tracking and visualization of maritime activities. I integrated the AIS API from the Norwegian Coastal Administration to stream data, which involved handling endpoints for vessel positions, detailed vessel information, and historical route data. I implemented key features such as real-time vessel tracking, historical route playback, future route prediction using machine learning models, and collision risk management using advanced calculation methods. To ensure a smooth user experience, I worked on optimizing the loading and processing of the large streamed datasets, enhancing the app's performance even with high volumes of incoming data.
The current state.
I have finished developing all the features of the AIS Visualization app. The app has been thoroughly tested on Liquid Galaxy Labs, both on real tablets and rigs, ensuring it functions seamlessly across all devices. It has also been successfully published on the App Store, making it accessible for users to download and use.
What's left to do.
All core functionalities of the AIS Visualization app are complete, and the app is fully operational. However, I plan to further enhance the prediction model and collision risk management features to improve their accuracy and efficiency.
What code got merged (or not) upstream
The code is publicly available in the Liquid Galaxy project GitHub repository, along with all the necessary documentation.
Any challenges or important things you learned during the project.
During the project, I faced significant challenges with lag and performance issues. Initially, the app struggled due to the large number of vessels being rendered in real-time on Google Maps, and the long KML files being sent to Liquid Galaxy, which were updated with each new vessel location from the streamed API. This caused noticeable lag and impacted the app's responsiveness.
To address these issues, I focused on performance enhancements and optimizations. I improved the efficiency of data handling and rendering processes, ensuring smoother operation even with extensive data streaming and frequent updates. This experience taught me valuable lessons in optimizing real-time data visualization and managing large datasets effectively.
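For illustration only (not the app's actual code), one way to cut that KML churn is to buffer the incoming vessel positions and rebuild the KML at a fixed interval instead of on every streamed update, roughly as sketched below.

import 'dart:async';

/// Buffers streamed AIS positions and flushes them on a fixed interval,
/// so the KML is regenerated and pushed to the rig at most once per tick.
class VesselKmlBatcher {
  VesselKmlBatcher(this.onFlush) {
    _timer = Timer.periodic(const Duration(seconds: 2), (_) => _flush());
  }

  final void Function(Map<String, List<double>> latest) onFlush;
  final _pending = <String, List<double>>{}; // MMSI -> [lat, lon]
  late final Timer _timer;

  /// Called for every AIS position message; only the latest fix per vessel is kept.
  void addPosition(String mmsi, double lat, double lon) {
    _pending[mmsi] = [lat, lon];
  }

  void _flush() {
    if (_pending.isEmpty) return;
    onFlush(Map.of(_pending)); // e.g. regenerate the KML and send it to the rig
    _pending.clear();
  }

  void dispose() => _timer.cancel();
}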
Links to: Github, documentation, and Play Store link.
GitHub: https://github.com/LiquidGalaxyLAB/LG-Ship-Automatic-Identification-System-visualization
Documentation: https://github.com/LiquidGalaxyLAB/LG-Ship-Automatic-Identification-System-visualization/blob/main/README.md
Final apk for tablets on drive: APK, or compile your own from code repo.