- Arnau Claramunt
- Josep Díaz
- Genís López
- Pay Mayench
This project is our implementation of the Know your CUPRA challenge, presented by CUPRA | SEAT at the HackUPC 2025 hackathon.
The project consists of a groundbreaking digital-twin representation of the 2024 Cupra Tavascan, built so that customers of the model can learn all of its cutting-edge functionality without reading the manual. This lets them fully exploit the car's features before they even receive the car.
This digital twin includes several state-of-the-art functionalities: a 3D model of the car integrated with a user-friendly tutorial; a voice-interactive generative AI agent that can answer any user question based on the official car user manual using a RAG strategy; and an electroencephalographic (mind-controlled) driving simulator for the Tavascan.
| Component | Technology | Purpose |
|---|---|---|
| Visualization | Unity | Creates a 3D visualization of the car and scene. |
| Hardware | Muse Headband | Headband that captures the user's EEG brain signals. |
| AI | Gemini 2.0 Flash | Voice interactive generative AI assistant. |
| API Layer | FastAPI | Communication between Muse Headband and Unity. |
| Integration | Docker | Containerizes the AI assistant so it runs on Windows as well as Linux. |
- Interactive user-friendly digital twin to showcase all car features.
When we first open the application, we are greeted by a real-time 3D animation of the Cupra Tavascan. This 3D model is the centerpiece of the application: it is the digital twin that brings the car to the user before they even own it, so faithful that they will notice no differences once they see the physical car. After that cinematic introduction, a "Know more" button invites the user to feel what it is like to own a Cupra Tavascan; pressing it leads to a new scene where every single feature the car offers can be discovered. A "Learn about..." menu pops up, offering six visually and voice-assisted tours that guide the user through all the amazing features, with no manual required. Some of these tours let you try a feature out, for example the car's acceleration and braking system, in a physically accurate simulation. At any moment, you can ask the virtual assistant any question you might have about the car by pressing a button and simply starting to talk. This part is built with Unity3D and C#, exercising practically every Unity feature: animations, lighting, physics, interactive UI, and integration with all the other parts of the project.
- Voice-interactive AI assistant: it knows the whole user manual and answers the customer's questions about the car (see the sketch after this list) using:
- Gemini 2.0 Flash
- Embeddings (RAG strategy)
- Voice processing
- Pipeline: record user audio -> process the audio and answer based on the manual -> output the response as audio together with a text transcript
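A minimal sketch of how the RAG part of this pipeline can be wired with the google-generativeai Python SDK. The manual chunks, helper names, and prompt are assumptions for illustration; audio recording, speech-to-text, and text-to-speech are omitted:

```python
import numpy as np
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Hypothetical: the official user manual pre-split into short text chunks.
manual_chunks = ["The regenerative braking system...", "To pair your phone..."]

def embed(texts, task_type):
    # Embed each text with Gemini's embedding endpoint.
    return [genai.embed_content(model="models/text-embedding-004",
                                content=t, task_type=task_type)["embedding"]
            for t in texts]

chunk_vecs = np.array(embed(manual_chunks, "retrieval_document"))

def answer(question, k=3):
    # RAG: retrieve the k manual chunks most similar to the question,
    # then let Gemini 2.0 Flash answer grounded on them.
    q = np.array(embed([question], "retrieval_query")[0])
    best = np.argsort(chunk_vecs @ q)[-k:]
    context = "\n".join(manual_chunks[i] for i in best)
    model = genai.GenerativeModel("gemini-2.0-flash")
    prompt = f"Answer using only this manual excerpt:\n{context}\n\nQuestion: {question}"
    return model.generate_content(prompt).text

print(answer("How does regenerative braking work?"))
```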
- Mind-controlled, physically accurate driving simulation to showcase the acceleration and braking of the car.
The EEG component is implemented using a Muse EEG headband. Although this device is primarily marketed to help users meditate by monitoring their brainwaves, it also provides real-time measurements of alpha and beta wave activity. Since higher beta wave levels generally correspond to increased mental activity, we can infer whether a person is focused or relaxed.
The headband uses four electrodes positioned near the frontal cortex to capture raw EEG signals. We acquired these signals in Python, then filtered out noise via a Fast Fourier Transform (FFT) to isolate the relevant frequency bands.
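As an illustration, band powers can be extracted from one window of raw samples like this; a minimal NumPy sketch, assuming a 256 Hz sampling rate (the Muse's usual rate) and standard alpha/beta band limits:

```python
import numpy as np

FS = 256  # Muse sampling rate in Hz (assumed)

def band_powers(window):
    """Return (alpha, beta) power for one window of raw EEG samples."""
    window = window - window.mean()                     # remove DC offset
    spectrum = np.abs(np.fft.rfft(window)) ** 2         # power spectrum via FFT
    freqs = np.fft.rfftfreq(len(window), d=1 / FS)
    alpha = spectrum[(freqs >= 8) & (freqs < 13)].sum()    # 8-13 Hz
    beta = spectrum[(freqs >= 13) & (freqs < 30)].sum()    # 13-30 Hz
    return alpha, beta
```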
Next, we assembled a dataset from the cleaned EEG readings and manually labeled each sampling period as “focused” or “relaxed.” Using this labeled data, we trained a Random Forest classifier to predict mental state from EEG features.
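The training step is standard scikit-learn; a sketch with placeholder data, assuming one (alpha, beta) pair per electrode as features and 0/1 labels for relaxed/focused:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: in the real pipeline X holds the band powers of each
# labeled sampling period and y the manual relaxed(0)/focused(1) labels.
rng = np.random.default_rng(0)
X = rng.random((600, 8))        # 4 electrodes x (alpha, beta)
y = rng.integers(0, 2, 600)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```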
Finally, we integrated the classifier into a simple visualization tool: an orange cube whose behavior—accelerating or braking a simulated car—is controlled by the model’s predictions. We exposed the classifier via a Flask API, allowing the Unity-based simulation to update the car’s speed in real time based on the user’s mental state.
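A minimal sketch of such an endpoint in Flask, as mentioned above (the stack table lists FastAPI, which works the same way); the route name and response shape are assumptions:

```python
from flask import Flask, jsonify

app = Flask(__name__)

def latest_band_powers():
    # Placeholder: in the real pipeline this returns the alpha/beta
    # powers of the most recent window read from the Muse stream.
    return [0.2, 0.7] * 4

@app.route("/state")
def state():
    # Classify the latest window with the trained model (clf from above)
    # and report it; Unity polls this and maps 1 -> accelerate, 0 -> brake.
    focused = int(clf.predict([latest_band_powers()])[0])
    return jsonify({"focused": focused})

if __name__ == "__main__":
    app.run(port=8000)
```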
Kinematic equations with the real values of power, torque, acceleration curves, mass, etc. are used to build an unprecedented simulator of the acceleration and braking phases of the car, as realistic as possible, implemented in Unity3D with C#. Best of all: it is controlled with the mind through the brain signals described above. We know that when you buy a Cupra you are not simply looking for a regular car: you want the best comfort and performance, and the Tavascan represents the most sophisticated, cutting-edge model of the brand. This simulation aims to make even the strictest enthusiast fall in love with the car in a way they cannot resist: by being the car and feeling it and its sporting capabilities.
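A sketch of the kind of longitudinal dynamics behind this, in Python for brevity (the actual simulator is C# in Unity); every number here is a placeholder, not the real Tavascan figure used in the project:

```python
# Placeholder parameters, not the exact Tavascan values.
MASS = 2200.0       # kg
MAX_POWER = 250e3   # W
BRAKE_DECEL = 9.0   # m/s^2
DRAG = 0.38         # lumped aero term: 0.5 * rho * Cd * A
DT = 0.02           # time step in s (Unity's FixedUpdate default)

def step(v, focused):
    """Advance speed v (m/s) one tick: focused -> accelerate, else brake."""
    if focused:
        force = MAX_POWER / max(v, 1.0) - DRAG * v * v  # F = P / v minus drag
        a = min(force / MASS, 8.0)                      # traction-limited launch
    else:
        a = -BRAKE_DECEL if v > 0 else 0.0
    return max(v + a * DT, 0.0)

v = 0.0
for _ in range(500):            # 10 simulated seconds of full focus
    v = step(v, focused=True)
print(f"speed after 10 s: {v * 3.6:.0f} km/h")
```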