Geovision AI, Anishka Yadav
A short description of the goals of the project. GeoVisionAI is a web-based application that allows users to ask geography-related questions (either by typing or by voice), which are answered by a Gemini AI model. The system identifies locations, generates a story, and visualizes the response on Liquid Galaxy using KML balloons, camera fly-tos, and orbit animations.
What you did. I developed a Progressive Web App (PWA) using HTML, CSS, JavaScript, Web Components, and Material 3 as the UI library. I used the OpenCage and Freesound APIs, along with Google Gemini 1.5 Flash as the AI model, to answer users' queries. The app generates a dynamic KML for every new query to drive the geospatial visualization on the Liquid Galaxy. I also implemented text-to-speech and playback of relevant soundscapes for an interactive, immersive user experience.
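The per-query dynamic KML described above can be sketched roughly as follows. This is a minimal Python illustration (the actual app is JavaScript); `build_story_kml` and its parameters are hypothetical names, and the real templates likely include more styling and the orbit tour:

```python
# Minimal sketch of per-query KML generation: a placemark whose balloon
# carries the AI-generated story, plus a LookAt for the camera fly-to.
def build_story_kml(name, story_html, lat, lon, range_m=5000):
    """Return a KML string with a description balloon and a fly-to view."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>{name}</name>
      <description><![CDATA[{story_html}]]></description>
      <LookAt>
        <longitude>{lon}</longitude>
        <latitude>{lat}</latitude>
        <range>{range_m}</range>
        <tilt>45</tilt>
        <heading>0</heading>
      </LookAt>
      <Point><coordinates>{lon},{lat},0</coordinates></Point>
    </Placemark>
  </Document>
</kml>"""

kml = build_story_kml("Mount Fuji", "<b>An iconic stratovolcano.</b>",
                      35.3606, 138.7274)
```

Google Earth renders the `description` as a balloon and flies the camera to the `LookAt` view; an orbit is typically layered on top as a `gx:Tour`.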
The current state. The project is successfully completed and is published to the Liquid Galaxy GO-Web-Store.
What's left to do. In some cases the orbit does not start on the first click, and sometimes it takes a long time to start. The Web Speech API is also not working reliably on mobile devices. I am working with my mentor to fix these issues.
What code got merged (or not) upstream. My project is fully developed and is published on the Liquid Galaxy GO-Web-Store.
Any challenges or important things you learned during the project. I faced several challenges during development: at first the dynamic KML was not being generated, and then it was not being visualized properly on LG. I also had trouble enhancing the user experience for a better understanding of the KML visualization on the LG, the orbit controls were not working as expected, and, lastly, the fallback image was always being shown on the LG for direct query commands.
Links to: Github, documentation, and GO Store link.
Github repo: https://github.com/LiquidGalaxyLAB/LG-GeoVisionAI
GO Store URL: https://store.liquidgalaxy.eu/index.html?app=GeoVisionAI
LG Airport Controller Simulator, Dev T Gadani
A short description of the goals of the project. The goal of this project is to simulate an airport controller, which guides air traffic, prevents airplanes from overlapping or colliding, and ensures they take off and land safely at the airport.
What you did. I developed a Node.js server to handle all communication and a Chromium-based client using plain JavaScript and HTML to display airplane animations. I also built a Flutter application that works as the controller, allowing start, stop, pause, and restart of both the simulation and the server. The controller includes a text-to-speech feature that announces airplane commands, with options to adjust voice settings.
The current state. The project is finished and the latest code is on GitHub.
What's left to do. Nothing.
What code got merged (or not) upstream. My project is fully developed and published on the Liquid Galaxy Go Store.
Any challenges or important things you learned during the project. I faced challenges in handling port communication, avoiding race conditions, and managing airplanes on the grid to ensure accurate path alignment.
Links to: Github, documentation, and GO Store link.
Github repo: https://github.com/LiquidGalaxyLAB/LG-Airport-Controller-Simulator
Go Store URL: https://store.liquidgalaxy.eu/index.html?app=LG%20Airport%20Controller%20Simulator
Voice-Driven Robot Goal Assignment System with Liquid Galaxy, Javier Mancho
A short description of the goals of the project. The project aims to enable the control of robots distributed across different areas through voice commands. Spoken instructions are converted into structured representations to compute the optimal path. Finally, Liquid Galaxy visualises the robots, their playzone, and the calculated path.
What you did. The project is built upon a client-server architecture designed to enable natural and intuitive interaction between users and autonomous robots. The client is implemented as a mobile application, which serves as the primary interface for end-users. Through this application, users are able to select robots and send voice commands.
On the server side, the system operates as the core processing unit responsible for transforming user inputs into actionable instructions for the robot. The first stage of this process involves the application of a Speech-to-Text model, which transcribes the spoken command into written text with high accuracy. Once the instruction is available in textual form, it is processed by a fine-tuned Gemma 2 model, which is specifically trained to translate natural language expressions into a formal representation based on the Planning Domain Definition Language (PDDL). Subsequently, the Fast Downward planner is employed to generate an optimal plan that outlines the sequence of actions required to fulfil the given instruction. The plan describes the route or strategy that the robot must follow in order to achieve the goal defined by the user.
Once the plan has been successfully generated, it is transmitted to a dedicated ROS2 publishing module. This module is responsible for analysing each action in the plan and computing the necessary motion parameters, including the direction, the distance to be covered, and the estimated execution time. Based on these calculations, the module issues the corresponding velocity commands by publishing linear_x and angular_z values to the /cmd_vel topic.
The system integrates with Liquid Galaxy to provide a visual representation of the robot and its environment. The platform displays the robot's status, its operational areas, and, once a plan is generated, the corresponding route together with the sequence of actions.
A legend complements the visualisation, ensuring that the user can easily interpret the steps and monitor execution in real time.
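The motion-parameter computation in the ROS2 publishing module can be sketched like this. This is a simplified Python illustration under assumed fixed cruise speeds; the constants and function names are hypothetical, and the real module wraps the resulting linear_x and angular_z values in messages published to /cmd_vel via ROS2:

```python
import math

# Illustrative conversion of plan actions into velocity commands.
# Speeds are assumptions, not the project's actual parameters.
LINEAR_SPEED = 0.2    # m/s, assumed cruising speed
ANGULAR_SPEED = 0.5   # rad/s, assumed turning speed

def motion_for_move(distance_m):
    """Linear velocity command and its duration for a straight move."""
    return {"linear_x": LINEAR_SPEED, "angular_z": 0.0,
            "duration_s": distance_m / LINEAR_SPEED}

def motion_for_turn(angle_rad):
    """Angular velocity command and its duration for an in-place turn."""
    direction = 1.0 if angle_rad >= 0 else -1.0
    return {"linear_x": 0.0, "angular_z": direction * ANGULAR_SPEED,
            "duration_s": abs(angle_rad) / ANGULAR_SPEED}

# e.g. a plan step "move 1.0 m, then turn 90 degrees left"
move = motion_for_move(1.0)
turn = motion_for_turn(math.pi / 2)
```

Each command would be published on /cmd_vel for its computed duration before moving on to the next plan action.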
The current state. The project has been fully completed and published in the Liquid Galaxy GO Web Store.
What's left to do. Although the project has been completed and demonstrates the full pipeline from voice commands to automated planning and visualisation, it has not yet been possible to validate the system on a real robot. A natural continuation would be to integrate the framework with the AMIGA robot to evaluate its performance in real-world conditions and confirm the effectiveness of the execution phase. Furthermore, the project could be extended by incorporating a Gazebo simulation displayed in Liquid Galaxy, providing a more immersive representation of the robot’s environment, as well as a real-time telemetry dashboard, enabling users to monitor system metrics and robot status during operation.
What code got merged (or not) upstream. My project is fully developed and published on the Liquid Galaxy Go-Web-Store.
Any challenges or important things you learned during the project. During the development of this project, several challenges were encountered that required significant effort to overcome. One of the first difficulties was deploying the Docker container on the server. The container initially presented connectivity issues when attempting to access the internet, which complicated the setup and required multiple iterations of configuration and troubleshooting before achieving a stable deployment. Another major challenge was related to the integration of ROS2, ROSPlan, and Gazebo. As these frameworks were not deeply familiar beforehand, considerable time was spent understanding their internal workings and ensuring compatibility. Finding the appropriate versions of each component proved to be particularly complex, since even minor mismatches could lead to failures in communication or execution. Finally, the LoRA fine-tuning of the Gemma 2 model also posed a significant challenge. To enable the model to translate natural language commands into PDDL predicates, a synthetic dataset had to be created, mapping voice instructions into their corresponding logical representations. This required careful design to guarantee that the dataset captured enough variability while maintaining consistency in structure. Moreover, the training process itself was computationally demanding, with long training times that slowed down experimentation and validation.
Links to: Github, documentation, and Go Store link.
Github Repo: https://github.com/LiquidGalaxyLAB/LG-Voice-Driven-Robot-Goal-Assignment-System
Go Store URL: https://store.liquidgalaxy.eu/index.html?app=Voice%20Driven%20Robots
Eco Explorer, Shuvam Swapnil Dash
A short description of the goals of the project. Eco Explorer for Liquid Galaxy provides a master interface that showcases a dashboard of various expansive forests around the globe and allows users to explore them in depth. The application offers features like Virtual Tour, Biodiversity Display, Graphical Visualizations, and Catastrophe Status of the forests. It works with the Liquid Galaxy rig to provide an immersive and educational virtual forest discovery experience.
What you did. I developed a Flutter/Dart application for 6.x-inch Android devices from scratch, which synchronizes the forest data to the Liquid Galaxy rig using SSH and KMLs. It uses Groq inference plus Deepgram text-to-speech to create a virtual tour guide, and is supported by REST APIs such as the NASA FIRMS API and the OpenWeather API to display forest fires and the air quality of the forests, respectively.
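The SSH/KML synchronization step can be sketched as follows. This Python fragment is illustrative only: the paths follow a common Liquid Galaxy convention (KMLs served from the master's web root and listed in kmls.txt), but the app's actual commands, port, and file names may differ:

```python
# Sketch of the shell commands an app might run over SSH to show a KML
# on a Liquid Galaxy rig. Paths and port follow a common LG convention
# (web root at /var/www/html, served on port 81); details vary per rig.
def upload_and_show_kml(kml_name, master_ip):
    """Return (remote_path, command) for publishing an uploaded KML."""
    remote_path = f"/var/www/html/{kml_name}"
    # Appending the KML's URL to kmls.txt makes Google Earth load it.
    command = (f"echo 'http://{master_ip}:81/{kml_name}' "
               f">> /var/www/html/kmls.txt")
    return remote_path, command

path, cmd = upload_and_show_kml("amazon_boundary.kml", "192.168.0.2")
```

In practice the app would upload the file to `remote_path` over SFTP, then execute `command` over SSH on the master node.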
The current state. As it stands, all the proposed functionalities of the application have been successfully implemented.
What's left to do. The functionalities have been deployed, but there is scope for expanding the forest database in future releases.
What code got merged (or not) upstream. My project is fully developed and published on the Liquid Galaxy Go Web Store.
Any challenges or important things you learned during the project. My first challenge was to collect a huge amount of data, like boundary KMLs, geographical and biodiversity data of the forests, by surfing around various internet sources, which required a lot of patience, precision, and careful filtering of relevant data. Another major challenge I dealt with was working on Ubuntu terminal commands and executing them in proper order through SSH in order to get the best possible KML output combinations. With the help of my mentor, Gabry Sir, I learned a lot about working with remote servers.
Links to: Github, documentation, and Go Store link.
Github repo: https://github.com/LiquidGalaxyLAB/Eco-Explorer
Go Store URL: https://store.liquidgalaxy.eu/index.html?app=Eco%20Explorer%20for%20Liquid%20Galaxy
Robotics Simulation with Gazebo, Debanjan Naskar
A short description of the goals of the project. The objective of this project is to create a Flutter-based mobile app that allows users to remotely operate and simulate robotic systems on any Liquid Galaxy rig. The user can see real-time robotic camera feeds spread over multiple synchronized screens, all from a single, easy-to-use interface. By combining mobile control with the immersive environment of Liquid Galaxy, this project creates a visually stimulating platform for robotics testing, demonstrations, and research.
What you did. My project had three parts:
- A Flutter app to control the robots and communicate with the LG rig, including:
  - A Controls page that sends commands over WebSocket to the server running the robotics simulations.
  - An SSH connection page that connects to existing LG rigs by scanning a QR code or entering the details manually.
  - Camera Control and LG Control pages that send the appropriate action commands to the LG rig and the video server.
- Two robot packages ready to be simulated and controlled on the LG rig:
  - SO Arm 100, an open-source robotic arm that can also perform a demo action in the simulation.
  - Amigabot Farm-ng, a movable platform that serves as a powerful agricultural robot.
- Docker builds and images to run the robot simulation on a server:
  - A standard Dockerfile for building ROS2 package images.
  - A Flask server image that streams video.
  - Docker Compose files and a Bash script that bind all Docker images with the correct configuration and provide one-command server startup.
The current state. All three parts of the project are completely developed and functional; it is 100% operational.
What's left to do. Fix any bugs that come up in the robot simulation. Further robots or demos can be added to the repository. This project will serve as a structural framework and basis for any future GSoC project that aims to create robotics simulations for Liquid Galaxy.
What code got merged (or not) upstream. My project is fully developed and published on the Liquid Galaxy Go Store.
Any challenges or important things you learned during the project. This project provided me with my first end-to-end product development experience. I was able to blend my existing skills in robotics simulations with new skills in Flutter, Docker, and Flask, none of which I had prior to starting this project. It also significantly improved my overall robotics skills. One challenge I faced was running the Docker server on a dedicated GPU, which was later resolved by providing more thorough documentation for the setup process. Overall, this was a valuable learning experience.
Links to: Github, documentation, and Go Store link.
Github repo: https://github.com/LiquidGalaxyLAB/LG-Robotics-Simulation-with-Gazebo
Documentation: https://github.com/LiquidGalaxyLAB/LG-Robotics-Simulation-with-Gazebo/blob/lg-robotics-docker-server/README.md
Go Store URL: https://store.liquidgalaxy.eu/index.html?app=RoboSimLG
DATA Spaces Visualization for Liquid Galaxy, Marc Bañeres Farrán
A short description of the goals of the project. DATA Spaces Visualization for Liquid Galaxy is a Flutter application that connects to multiple FIWARE Data Spaces to visualize urban mobility, environmental, and agricultural data in an immersive environment. The app integrates three key data spaces: EMT Málaga's bus service for real-time bus tracking, Santander's environmental sensors for city monitoring, and Lleida's pig farms for agricultural analytics, providing synchronized visualization through the Liquid Galaxy.
What you did. I developed a Flutter/Dart application that integrates with multiple FIWARE Data Spaces, implementing real-time data synchronization with 5-minute refresh cycles for buses and 1-minute updates for environmental sensors. The app features robust SSH/SFTP communication with Liquid Galaxy, including automatic reconnection and per-screen refresh strategies. I implemented QR-based setup, synchronized map-LG navigation, and comprehensive system controls. Special attention was given to accessibility with a colorblind mode affecting both UI and KML generation. The application includes filtering capabilities, detailed information cards, and trail visualization for historical data.
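The colorblind mode's effect on KML generation can be sketched like this. The palettes and function names below are hypothetical, not the app's actual values; only the aabbggrr byte order is KML's real color format:

```python
# Sketch of a colorblind mode applied at KML-generation time.
# Palettes are illustrative; the substitutes loosely follow the
# colorblind-safe Okabe-Ito scheme.
DEFAULT_PALETTE = {"good": "#2ECC40", "warn": "#FFDC00", "bad": "#FF4136"}
COLORBLIND_PALETTE = {"good": "#0072B2", "warn": "#E69F00", "bad": "#D55E00"}

def kml_color(hex_rgb, alpha="ff"):
    """Convert '#RRGGBB' to KML's aabbggrr byte order."""
    r, g, b = hex_rgb[1:3], hex_rgb[3:5], hex_rgb[5:7]
    return (alpha + b + g + r).lower()

def style_color(level, colorblind=False):
    """Pick the palette for the current mode and emit a KML color."""
    palette = COLORBLIND_PALETTE if colorblind else DEFAULT_PALETTE
    return kml_color(palette[level])
```

Routing every generated `<color>` tag through one function like this keeps the UI theme and the KML output consistent when the mode is toggled.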
The current state. The project is fully complete with all core features implemented and thoroughly tested. All three data space visualizations are functional, real-time synchronization between the app and LG is working seamlessly, and the system control interface is operational.
What's left to do. The application could be expanded to include more FIWARE Data Spaces sources.
What code got merged (or not) upstream. My project is fully developed and published on the Liquid Galaxy Go Store.
Any challenges or important things you learned during the project. The main challenge was handling real-time data synchronization between multiple data sources and the Liquid Galaxy. Working with FIWARE Data Spaces taught me a lot about data normalization and real-time updates. Additionally, developing the colorblind mode required careful consideration of accessibility standards to ensure the visualization was effective for all users.
Links to: Github, documentation, and Go Store.
Github repo: https://github.com/LiquidGalaxyLAB/DATA-Spaces-for-LG-Frontend
Documentation: https://drive.google.com/file/d/16ZMXKJpAwIRN1RG3q1oxYQ8XjbMqj7vA/view?usp=sharing
Go Store url: https://store.liquidgalaxy.eu/index.html?app=DATA%20Spaces%20Visualization%20LG
LG Master Web App, Lucía F. Giner
A short description of the goals of the project. The goal of this project is to make it as easy as possible for future contributors to achieve the goals of their own projects. Some of them may not be familiar with how Flutter or GSoC works, so the project aims to smooth out the GSoC experience for them.
What you did. I built a project where documentation is as important as the code itself. The code consists of a simple application that shows the minimum, mandatory screens that every GSoC application should have, along with some optional features that contributors may want to implement in their projects (for example, an AI chat or interaction with a Node.js server) but may not know how to. Every single code file in the repository (the files related to the user interface, to dependencies, to logic, etc.) is commented and extensively explained so that contributors can understand exactly what each line does (even the Linux commands for the Liquid Galaxy are explained part by part). That way, contributors have a single place to check what every part of the code should do. The README provided alongside the code is also very long, for a reason: it not only gives more information about what every screen does and how the GSoC project should be organized, but also about how the whole GSoC program works, so that contributors have information about meetings, mentors, common problems, etc. from the very first moment. To achieve this, I researched past projects (to identify the minimum, mandatory screens and how the code works) and talked with different students while developing the README, so that it is as complete as possible and resolves the doubts that can come up during the GSoC experience.
The current state. Project finished (both code and documentation) and also published on the Liquid Galaxy Go Webstore.
What's left to do. Nothing.
What code got merged (or not) upstream. My project is fully developed and published on the Liquid Galaxy Go Store.
Any challenges or important things you learned during the project. Since I had never worked with Liquid Galaxy before, I had a lot of problems with the connection functions, because at first I didn't really understand the logic behind them, so I didn't know how to solve them. I was finally able to, and that makes me happy because, thanks to all those problems, I could make my explanations about connecting to the Liquid Galaxy even more complete: I was able to explain exactly what information would have helped me when the problems started to surface. Every problem I encountered during development was an opportunity to provide an even more complete explanation for future contributors, so that they wouldn't run into the same problems I did.
Links to: Github, documentation, and Go Store link.
Github repo: https://github.com/LiquidGalaxyLAB/LG-Master-Web-App
Go Store URL: https://store.liquidgalaxy.eu/index.html?app=LG%20Master%20Web%20App
LG RoboStream, Alejandro Bernaldo
A short description of the goals of the project. The goal of this project was to develop an app that could connect to a real robot. The app extracts data from the robot and sends the information selected by the user to the Liquid Galaxy screens, enabling real-time visualization of the robot’s data.
What you did. I built a system with an interface that allows the integration of communication components with other types of robots that use ROS. Everything is prepared to work with a real robot. The app is fully developed and completed, and the project also includes a server that manages all communications between the app, the Liquid Galaxy, and the robot. The server is ready for the robot’s deployment.
The current state. Approximately 93% complete.
What's left to do. Test the system with a real robot.
What code got merged (or not) upstream. My project is fully developed and published on the Liquid Galaxy Go Store.
Any challenges or important things you learned during the project. When I started the project, I knew only a little about programming in Flutter and Python. I had never used Docker and didn’t know how it worked. During the development of this project, I learned how to program in Flutter and Python, understood how Docker works, and gained experience building a server inside Docker.
Links to: Github, documentation, and Go Store link.
Github repo: https://github.com/LiquidGalaxyLAB/LG-RoboStream
Go Store URL: https://store.liquidgalaxy.eu/index.html?app=RoboStream
Martian Climate Dashboard, Mohit Sharma
A short description of the goals of the project. Martian Climate Dashboard visualizes historical Mars climate data (temperature, pressure, density, etc.), letting users pick timeframes, interpolate data, generate KML, and stream it to Liquid Galaxy for immersive 3D visualization.
What you did. I developed a Flutter-based Martian Climate Dashboard that fetches Mars Climate Database data, generates KML, and sends it to Liquid Galaxy; it also supports natural-language Q&A about the visuals via LLMs.
The current state. The project has been fully completed and published in the Liquid Galaxy GO Web Store.
What's left to do. KML generation takes a lot of time, but reducing the detail to save time compromises quality. A way to maintain both quality and generation speed is needed.
What code got merged (or not) upstream. My project is fully developed and published on the Liquid Galaxy Go Store.
Any challenges or important things you learned during the project. Some Mars weather data was spotty or messy, so I cleaned it up and filled gaps by estimating between points. Big map files made loading slow, so I split them into smaller pieces and reused cached bits. After reboots, switching the rig back to "Mars mode" often broke, so I added automatic checks and retries. During this project I learned the importance of testing.
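The gap-filling step can be sketched as simple linear interpolation between the nearest known readings. This is an illustrative Python fragment, not the project's actual code; the real pipeline may use a different estimator:

```python
# Sketch of gap filling: replace missing readings by linearly
# interpolating between the nearest known neighbours.
def fill_gaps(series):
    """Fill None entries; values at the edges with no neighbour stay None."""
    out = list(series)
    known = [i for i, v in enumerate(out) if v is not None]
    for i, v in enumerate(out):
        if v is not None:
            continue
        prev = max((k for k in known if k < i), default=None)
        nxt = min((k for k in known if k > i), default=None)
        if prev is None or nxt is None:
            continue  # cannot interpolate without both neighbours
        frac = (i - prev) / (nxt - prev)
        out[i] = out[prev] + frac * (out[nxt] - out[prev])
    return out

temps = fill_gaps([-60.0, None, None, -54.0])
# temps is approximately [-60.0, -58.0, -56.0, -54.0]
```

Linear interpolation keeps the filled values bounded by their neighbours, which is a reasonable default for slowly varying climate series.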
Links to: Github, documentation, and Go Store link.
Github repo: https://github.com/LiquidGalaxyLAB/Martian-Climate-Dashboard
Go Store URL: https://store.liquidgalaxy.eu/app.html?name=Martian%20Climate%20Dashboard
Open Buildings Visualizer, Jaivardhan Shukla
A short description of the goals of the project. The goal of this project is to develop a Flutter-based mobile application that brings Google's Open Buildings dataset to Liquid Galaxy. Users can tap any location on the synchronized map to view building footprints, metadata, and confidence scores. Interactive 3D visualizations allow exploration of regions or individual buildings with immersive tours and analysis.
What you did. I developed the LG Building Footprint Visualization app using Flutter, Dart, SSH, and KML with Google’s Open Buildings API. The app supports both grid-based and free navigation, enabling users to explore preset landmarks or freely move across the map to select regions for visualization. Building footprints are dynamically retrieved with metadata, allowing both regional clusters and individual buildings to be visualized in 3D, along with detailed analytics such as confidence scores and area measurements. Liquid Galaxy integration delivers synchronized multi-screen display, progressive 3D rendering, real-time tours around regions or single buildings, and fully interactive exploration.
The current state. The project has been successfully completed and published on the Liquid Galaxy Go-Webstore, with all proposed functionalities fully implemented and deployed. The system is stable and functional.
What's left to do. Minor refinements to tour playback reliability - occasionally tours fail to start on first iteration and require a second attempt. Camera angle optimization for individual preset locations to ensure consistent viewing experience.
What code got merged (or not) upstream. My project is fully developed and published on the Go Web Store.
Any challenges or important things you learned during the project. One of the key challenges was generating complex KML structures for 3D building visualization while keeping performance smooth. To achieve this, I implemented progressive loading — sending markers first for instant feedback, then gradually enhancing them with 3D geometry in batches. Another challenge was maintaining real-time synchronization between the mobile app, SSH connections, and Liquid Galaxy displays, which highlighted the importance of robust error handling and stable communication. This project provided me invaluable experience in building immersive geospatial visualization systems, combining mobile interaction with large-scale multi-screen environments. It not only deepened my skills in Flutter, SSH integration, and KML generation but also strengthened my ability to design scalable, interactive tools that transform raw geospatial data into engaging visual experiences.
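The progressive-loading idea can be sketched as follows. This is a hedged Python illustration with hypothetical names (`progressive_load`, `send`, `batch_size`); the real app sends KML to the rig over SSH rather than calling a local function:

```python
# Sketch of progressive loading: push lightweight markers first for
# instant feedback, then enhance with 3D geometry in fixed-size batches.
def batched(items, size):
    """Yield successive batches of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

def progressive_load(buildings, send, batch_size=25):
    """Send all markers immediately, then 3D geometry batch by batch."""
    send("markers", [b["id"] for b in buildings])    # instant feedback
    for batch in batched(buildings, batch_size):
        send("geometry", [b["id"] for b in batch])   # gradual enhancement

sent = []
progressive_load([{"id": i} for i in range(60)],
                 lambda kind, ids: sent.append((kind, len(ids))),
                 batch_size=25)
# sent is [("markers", 60), ("geometry", 25), ("geometry", 25), ("geometry", 10)]
```

Batching keeps each KML update small enough to render smoothly while the full 3D scene accumulates in the background.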
Links to: Github, documentation, and Go Store link.
Github repo: https://github.com/LiquidGalaxyLAB/Google-Research-Open-Buildings-Data-Visualization
Go Store url: https://store.liquidgalaxy.eu/index.html?app=Open%20Buildings%20Visualizer
Catastrophe Visualizer, Sanya Singh
A short description of the goals of the project. The project aims to fetch disaster data from APIs such as USGS, build a disaster model that filters and clusters the data for 3D Earth visualization, establish an SSH connection to the rig, and generate KML files for geographic representation.
What you did. I implemented robust state management using the Flutter Provider pattern for real-time data handling, created a modular service architecture with separate services for API communication, Liquid Galaxy integration, and KML generation, and developed a shared-preferences system to persist LG connection settings.
The current state. The application is nearly complete, with all major features implemented and functional. The core disaster monitoring, classification, and Liquid Galaxy integration systems are fully operational, except that KML generation still has some issues.
What's left to do. Fix the KML error and ensure proper KML generation on the Liquid Galaxy system.
What code got merged (or not) upstream. My project is still not finished, and so has not been merged.
Any challenges or important things you learned during the project. I faced several challenges during development: at first the dynamic KML was not being generated, and then it was not being visualized properly on LG. I also had trouble enhancing the user experience for a better understanding of the KML visualization on the LG.
Links to: Github, documentation, and Go Store link.
Github repo: https://github.com/LiquidGalaxyLAB/Catastrophe-Visualizer