Embedded Demo
Live Demo Experience
You can try the AI assistant, which processes voice and text banking commands, directly on this page.
Live Space
The demo panel is presented as a polished embedded surface, so the experience feels native to the site rather than redirecting you away from it.
Preparing the demo
This layer disappears once the embedded interface loads. If the frame takes longer than expected to load, you can still open the demo directly on Hugging Face.
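The overlay-plus-iframe pattern described above can be sketched as a minimal markup fragment. The Space path and direct `*.hf.space` URL below are hypothetical placeholders, not the real demo's addresses:

```html
<!-- Minimal sketch, assuming a hypothetical USER/SPACE path.
     The loading layer covers the panel and is hidden once the iframe loads;
     the fallback link stays usable if loading is slow. -->
<div id="demo-wrapper">
  <div id="demo-loading">
    Preparing the demo…
    <a href="https://huggingface.co/spaces/USER/SPACE"
       target="_blank" rel="noopener">Open the demo directly on Hugging Face</a>
  </div>
  <iframe
    src="https://USER-SPACE.hf.space"
    onload="document.getElementById('demo-loading').style.display = 'none'"
    width="100%" height="720"
  ></iframe>
</div>
```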
Demo Notes
Short but useful context around the live panel
A compact set of info cards sits under the embedded space, so the page reads like a product demo rather than just an iframe wrapper.
An assistant flow for Turkish banking commands
It receives voice or text commands, predicts the relevant intent, and routes the command to the appropriate response.
Voice command input and text command processing
Both microphone input and typed prompts are supported within the same interaction flow.
Whisper, Transformers, Gradio, and PyTorch
Speech-to-text, intent modeling, and output generation work together as one connected system.
External demo and model resources
If needed, you can jump directly to Hugging Face for the live Space or inspect the model page on its own.
External Links
Direct access to the source surfaces
Alongside the embedded experience, the page also links directly to the original Hugging Face demo and the model page.
Open on Hugging Face
Open the Space interface directly in a new tab if you prefer the original hosting surface.
Model page
Visit the model card for technical context, task framing, and training details.